Test Report: QEMU_macOS 19689

                    
af422e057ba227eec8656c67d09f56de251f325e:2024-09-23:36336

Failed tests (157/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.2
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.26
22 TestOffline 10.01
27 TestAddons/Setup 10.23
28 TestCertOptions 10.27
29 TestCertExpiration 195.36
30 TestDockerFlags 10.14
31 TestForceSystemdFlag 10.29
32 TestForceSystemdEnv 10.56
38 TestErrorSpam/setup 9.91
47 TestFunctional/serial/StartWithProxy 10.02
49 TestFunctional/serial/SoftStart 5.25
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 2.25
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.04
63 TestFunctional/serial/ExtraConfig 5.27
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.07
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.27
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.3
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.04
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
110 TestFunctional/parallel/ServiceCmd/Format 0.04
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 105.62
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.32
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.05
141 TestMultiControlPlane/serial/StartCluster 9.89
142 TestMultiControlPlane/serial/DeployApp 100.79
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
150 TestMultiControlPlane/serial/RestartSecondaryNode 38.81
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.44
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
155 TestMultiControlPlane/serial/StopCluster 2.01
156 TestMultiControlPlane/serial/RestartCluster 5.26
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
162 TestImageBuild/serial/Setup 9.93
165 TestJSONOutput/start/Command 9.96
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.3
197 TestMountStart/serial/StartWithMountFirst 10.12
200 TestMultiNode/serial/FreshStart2Nodes 9.91
201 TestMultiNode/serial/DeployApp2Nodes 113.96
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.08
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 54.86
209 TestMultiNode/serial/RestartKeepsNodes 8.43
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.69
212 TestMultiNode/serial/RestartMultiNode 5.25
213 TestMultiNode/serial/ValidateNameConflict 20.35
217 TestPreload 10.18
219 TestScheduledStopUnix 10.03
220 TestSkaffold 12.34
223 TestRunningBinaryUpgrade 593.07
225 TestKubernetesUpgrade 18.85
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 0.97
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.33
241 TestStoppedBinaryUpgrade/Upgrade 575.16
243 TestPause/serial/Start 9.99
253 TestNoKubernetes/serial/StartWithK8s 9.87
254 TestNoKubernetes/serial/StartWithStopK8s 5.3
255 TestNoKubernetes/serial/Start 5.31
259 TestNoKubernetes/serial/StartNoArgs 5.32
261 TestNetworkPlugins/group/auto/Start 9.89
262 TestNetworkPlugins/group/kindnet/Start 9.96
263 TestNetworkPlugins/group/calico/Start 10.05
264 TestNetworkPlugins/group/custom-flannel/Start 9.81
265 TestNetworkPlugins/group/false/Start 9.84
266 TestNetworkPlugins/group/enable-default-cni/Start 9.85
267 TestNetworkPlugins/group/flannel/Start 9.88
268 TestNetworkPlugins/group/bridge/Start 9.76
269 TestNetworkPlugins/group/kubenet/Start 9.91
271 TestStartStop/group/old-k8s-version/serial/FirstStart 9.88
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.23
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.1
283 TestStartStop/group/no-preload/serial/FirstStart 9.88
284 TestStartStop/group/no-preload/serial/DeployApp 0.09
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.2
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.05
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
292 TestStartStop/group/embed-certs/serial/FirstStart 10.13
293 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.11
294 TestStartStop/group/no-preload/serial/Pause 0.12
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.81
297 TestStartStop/group/embed-certs/serial/DeployApp 0.1
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
304 TestStartStop/group/embed-certs/serial/SecondStart 5.26
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 7.31
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/embed-certs/serial/Pause 0.1
312 TestStartStop/group/newest-cni/serial/FirstStart 10.17
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
321 TestStartStop/group/newest-cni/serial/SecondStart 5.26
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (15.2s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-437000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-437000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (15.199928292s)

-- stdout --
	{"specversion":"1.0","id":"1c5a4827-bd97-4103-85e1-5e0df8482b35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-437000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1b80aed-aa13-4ad4-899c-e68998f2d570","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"95b0e921-69d9-44de-a7dd-e23799e80ce8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig"}}
	{"specversion":"1.0","id":"2d7b1caa-1e18-4577-9f44-fdc5f10f41bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"041ecbc1-de8a-42f0-92e1-a8396380a811","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4cd50517-9f48-45e2-8af2-ac8596e92fcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube"}}
	{"specversion":"1.0","id":"0048b14d-a41e-41fa-b99a-fe4382dab434","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"637b72ad-6ac0-498a-846f-52885b62b524","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0d69a3da-2f5c-46e6-9b84-010ec807d9f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"f50a4f5b-18fd-4af5-9bfb-9b298c7c5dab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"797689eb-625f-4764-9858-7672d1d780b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-437000\" primary control-plane node in \"download-only-437000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"556270f6-abeb-4e06-95e9-8157049f8925","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e72d22e-5582-48b7-8e8b-9ae3e58ace88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104d396c0 0x104d396c0 0x104d396c0 0x104d396c0 0x104d396c0 0x104d396c0 0x104d396c0] Decompressors:map[bz2:0x14000a057f0 gz:0x14000a057f8 tar:0x14000a057a0 tar.bz2:0x14000a057b0 tar.gz:0x14000a057c0 tar.xz:0x14000a057d0 tar.zst:0x14000a057e0 tbz2:0x14000a057b0 tgz:0x14
000a057c0 txz:0x14000a057d0 tzst:0x14000a057e0 xz:0x14000a05800 zip:0x14000a05810 zst:0x14000a05808] Getters:map[file:0x140000647a0 http:0x140008b6140 https:0x140008b6190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"a18d0d8e-77bd-404d-9743-9c7249076617","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0923 03:19:05.270721    7122 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:19:05.270881    7122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:19:05.270885    7122 out.go:358] Setting ErrFile to fd 2...
	I0923 03:19:05.270887    7122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:19:05.271033    7122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	W0923 03:19:05.271122    7122 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19689-6600/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19689-6600/.minikube/config/config.json: no such file or directory
	I0923 03:19:05.272516    7122 out.go:352] Setting JSON to true
	I0923 03:19:05.289805    7122 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4716,"bootTime":1727082029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:19:05.289872    7122 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:19:05.296262    7122 out.go:97] [download-only-437000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:19:05.296381    7122 notify.go:220] Checking for updates...
	W0923 03:19:05.296448    7122 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 03:19:05.299992    7122 out.go:169] MINIKUBE_LOCATION=19689
	I0923 03:19:05.303456    7122 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:19:05.309290    7122 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:19:05.313309    7122 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:19:05.317189    7122 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	W0923 03:19:05.329274    7122 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 03:19:05.329498    7122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:19:05.332370    7122 out.go:97] Using the qemu2 driver based on user configuration
	I0923 03:19:05.332388    7122 start.go:297] selected driver: qemu2
	I0923 03:19:05.332397    7122 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:19:05.332474    7122 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:19:05.335274    7122 out.go:169] Automatically selected the socket_vmnet network
	I0923 03:19:05.340848    7122 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0923 03:19:05.340961    7122 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 03:19:05.341022    7122 cni.go:84] Creating CNI manager for ""
	I0923 03:19:05.341068    7122 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 03:19:05.341134    7122 start.go:340] cluster config:
	{Name:download-only-437000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:19:05.345122    7122 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:19:05.349296    7122 out.go:97] Downloading VM boot image ...
	I0923 03:19:05.349316    7122 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso
	I0923 03:19:11.806673    7122 out.go:97] Starting "download-only-437000" primary control-plane node in "download-only-437000" cluster
	I0923 03:19:11.806696    7122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 03:19:11.864375    7122 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 03:19:11.864389    7122 cache.go:56] Caching tarball of preloaded images
	I0923 03:19:11.864561    7122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 03:19:11.868716    7122 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 03:19:11.868722    7122 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 03:19:11.943638    7122 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 03:19:19.129330    7122 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 03:19:19.129505    7122 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 03:19:19.825648    7122 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 03:19:19.825844    7122 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/download-only-437000/config.json ...
	I0923 03:19:19.825861    7122 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/download-only-437000/config.json: {Name:mk2eb9b3f2689a5995386bf57780e3d7152cac5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:19:19.826085    7122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 03:19:19.827115    7122 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0923 03:19:20.386483    7122 out.go:193] 
	W0923 03:19:20.393353    7122 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104d396c0 0x104d396c0 0x104d396c0 0x104d396c0 0x104d396c0 0x104d396c0 0x104d396c0] Decompressors:map[bz2:0x14000a057f0 gz:0x14000a057f8 tar:0x14000a057a0 tar.bz2:0x14000a057b0 tar.gz:0x14000a057c0 tar.xz:0x14000a057d0 tar.zst:0x14000a057e0 tbz2:0x14000a057b0 tgz:0x14000a057c0 txz:0x14000a057d0 tzst:0x14000a057e0 xz:0x14000a05800 zip:0x14000a05810 zst:0x14000a05808] Getters:map[file:0x140000647a0 http:0x140008b6140 https:0x140008b6190] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0923 03:19:20.393382    7122 out_reason.go:110] 
	W0923 03:19:20.403427    7122 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:19:20.406345    7122 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-437000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (15.20s)
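
The exit status 40 (INET_CACHE_KUBECTL) above traces to a 404 on the checksum file https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256, which is consistent with upstream Kubernetes never having published darwin/arm64 kubectl binaries for v1.20.0. As a minimal, standalone Go sketch (not minikube code), the probe the downloader performs can be reproduced like this:

// probe.go: HEAD the kubectl binary and its checksum file to see which
// of the two URLs 404s. Version and platform are taken from the log above.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	base := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl"
	for _, url := range []string{base, base + ".sha256"} {
		resp, err := http.Head(url) // the status code is all we need
		if err != nil {
			fmt.Println(url, "error:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // the report shows the .sha256 probe returning 404
	}
}

Running the same probe against a release that does ship darwin/arm64 binaries (e.g. v1.31.1, used elsewhere in this report) should return 200 for both URLs.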

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestBinaryMirror (0.26s)

=== RUN   TestBinaryMirror
I0923 03:19:28.566041    7121 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-429000 --alsologtostderr --binary-mirror http://127.0.0.1:51036 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-429000 --alsologtostderr --binary-mirror http://127.0.0.1:51036 --driver=qemu2 : exit status 40 (155.426625ms)

-- stdout --
	* [binary-mirror-429000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-429000" primary control-plane node in "binary-mirror-429000" cluster
	
	

-- /stdout --
** stderr ** 
	I0923 03:19:28.625598    7185 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:19:28.625712    7185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:19:28.625716    7185 out.go:358] Setting ErrFile to fd 2...
	I0923 03:19:28.625719    7185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:19:28.625844    7185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:19:28.626940    7185 out.go:352] Setting JSON to false
	I0923 03:19:28.643084    7185 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4739,"bootTime":1727082029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:19:28.643156    7185 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:19:28.647448    7185 out.go:177] * [binary-mirror-429000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:19:28.654484    7185 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:19:28.654525    7185 notify.go:220] Checking for updates...
	I0923 03:19:28.662450    7185 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:19:28.666504    7185 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:19:28.669512    7185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:19:28.672467    7185 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:19:28.675716    7185 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:19:28.679401    7185 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:19:28.686513    7185 start.go:297] selected driver: qemu2
	I0923 03:19:28.686519    7185 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:19:28.686617    7185 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:19:28.689415    7185 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:19:28.695849    7185 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0923 03:19:28.695935    7185 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 03:19:28.695960    7185 cni.go:84] Creating CNI manager for ""
	I0923 03:19:28.695987    7185 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:19:28.696003    7185 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:19:28.696052    7185 start.go:340] cluster config:
	{Name:binary-mirror-429000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:51036 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_
vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:19:28.699841    7185 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:19:28.707501    7185 out.go:177] * Starting "binary-mirror-429000" primary control-plane node in "binary-mirror-429000" cluster
	I0923 03:19:28.711508    7185 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:19:28.711526    7185 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:19:28.711536    7185 cache.go:56] Caching tarball of preloaded images
	I0923 03:19:28.711632    7185 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:19:28.711638    7185 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:19:28.711844    7185 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/binary-mirror-429000/config.json ...
	I0923 03:19:28.711856    7185 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/binary-mirror-429000/config.json: {Name:mkbc75fa7eee6448479f5504f37c40777af91232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:19:28.712191    7185 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:19:28.712246    7185 download.go:107] Downloading: http://127.0.0.1:51036/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51036/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0923 03:19:28.728690    7185 out.go:201] 
	W0923 03:19:28.732511    7185 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:51036/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51036/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:51036/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51036/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104c056c0 0x104c056c0 0x104c056c0 0x104c056c0 0x104c056c0 0x104c056c0 0x104c056c0] Decompressors:map[bz2:0x14000703770 gz:0x14000703778 tar:0x14000703720 tar.bz2:0x14000703730 tar.gz:0x14000703740 tar.xz:0x14000703750 tar.zst:0x14000703760 tbz2:0x14000703730 tgz:0x14000703740 txz:0x14000703750 tzst:0x14000703760 xz:0x14000703780 zip:0x14000703790 zst:0x14000703788] Getters:map[file:0x14000707130 http:0x14000599360 https:0x140005993b0] Dir:
false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:51036/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51036/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:51036/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51036/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104c056c0 0x104c056c0 0x104c056c0 0x104c056c0 0x104c056c0 0x104c056c0 0x104c056c0] Decompressors:map[bz2:0x14000703770 gz:0x14000703778 tar:0x14000703720 tar.bz2:0x14000703730 tar.gz:0x14000703740 tar.xz:0x14000703750 tar.zst:0x14000703760 tbz2:0x14000703730 tgz:0x14000703740 txz:0x14000703750 tzst:0x14000703760 xz:0x14000703780 zip:0x14000703790 zst:0x14000703788] Getters:map[file:0x14000707130 http:0x14000599360 https:0x140005993b0] Dir:false ProgressListener:<nil> Insecure:fals
e DisableSymlinks:false Options:[]}: unexpected EOF
	W0923 03:19:28.732518    7185 out.go:270] * 
	* 
	W0923 03:19:28.733012    7185 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:19:28.747455    7185 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-429000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:51036" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-429000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-429000
--- FAIL: TestBinaryMirror (0.26s)
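
TestBinaryMirror redirects binary downloads to a local HTTP server via --binary-mirror http://127.0.0.1:51036, and the failure above is an "unexpected EOF" while fetching /v1.31.1/bin/darwin/arm64/kubectl from it, i.e. the connection to the mirror was cut off mid-transfer. For reference, a mirror of the shape this flag expects (path layout taken from the URL in the log) can be sketched as a plain static file server; this is an illustrative stand-in, not the helper the test itself starts:

// mirror.go: hypothetical static binary mirror. Files are expected under
// ./mirror-root/<version>/bin/<os>/<arch>/<binary>, plus a matching
// <binary>.sha256, mirroring the dl.k8s.io release layout.
package main

import (
	"log"
	"net/http"
)

func main() {
	fs := http.FileServer(http.Dir("./mirror-root"))
	log.Fatal(http.ListenAndServe("127.0.0.1:51036", fs))
}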

TestOffline (10.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-430000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-430000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.873306833s)

-- stdout --
	* [offline-docker-430000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-430000" primary control-plane node in "offline-docker-430000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-430000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:30:49.111152    8808 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:30:49.111290    8808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:30:49.111293    8808 out.go:358] Setting ErrFile to fd 2...
	I0923 03:30:49.111303    8808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:30:49.111470    8808 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:30:49.112581    8808 out.go:352] Setting JSON to false
	I0923 03:30:49.130382    8808 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5420,"bootTime":1727082029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:30:49.130465    8808 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:30:49.134369    8808 out.go:177] * [offline-docker-430000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:30:49.141348    8808 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:30:49.141366    8808 notify.go:220] Checking for updates...
	I0923 03:30:49.148335    8808 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:30:49.151291    8808 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:30:49.154349    8808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:30:49.157404    8808 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:30:49.160394    8808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:30:49.163688    8808 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:30:49.163751    8808 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:30:49.167309    8808 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:30:49.174363    8808 start.go:297] selected driver: qemu2
	I0923 03:30:49.174375    8808 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:30:49.174384    8808 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:30:49.176314    8808 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:30:49.179329    8808 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:30:49.182416    8808 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:30:49.182433    8808 cni.go:84] Creating CNI manager for ""
	I0923 03:30:49.182456    8808 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:30:49.182461    8808 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:30:49.182502    8808 start.go:340] cluster config:
	{Name:offline-docker-430000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:30:49.185966    8808 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:30:49.193328    8808 out.go:177] * Starting "offline-docker-430000" primary control-plane node in "offline-docker-430000" cluster
	I0923 03:30:49.197332    8808 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:30:49.197374    8808 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:30:49.197381    8808 cache.go:56] Caching tarball of preloaded images
	I0923 03:30:49.197462    8808 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:30:49.197468    8808 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:30:49.197541    8808 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/offline-docker-430000/config.json ...
	I0923 03:30:49.197552    8808 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/offline-docker-430000/config.json: {Name:mk4af25e980b170c408631faea655aa6dfb9df50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:30:49.197844    8808 start.go:360] acquireMachinesLock for offline-docker-430000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:30:49.197879    8808 start.go:364] duration metric: took 25.459µs to acquireMachinesLock for "offline-docker-430000"
	I0923 03:30:49.197891    8808 start.go:93] Provisioning new machine with config: &{Name:offline-docker-430000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:30:49.197919    8808 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:30:49.206327    8808 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 03:30:49.222478    8808 start.go:159] libmachine.API.Create for "offline-docker-430000" (driver="qemu2")
	I0923 03:30:49.222514    8808 client.go:168] LocalClient.Create starting
	I0923 03:30:49.222623    8808 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:30:49.222668    8808 main.go:141] libmachine: Decoding PEM data...
	I0923 03:30:49.222683    8808 main.go:141] libmachine: Parsing certificate...
	I0923 03:30:49.222726    8808 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:30:49.222750    8808 main.go:141] libmachine: Decoding PEM data...
	I0923 03:30:49.222761    8808 main.go:141] libmachine: Parsing certificate...
	I0923 03:30:49.223138    8808 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:30:49.385689    8808 main.go:141] libmachine: Creating SSH key...
	I0923 03:30:49.431224    8808 main.go:141] libmachine: Creating Disk image...
	I0923 03:30:49.431239    8808 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:30:49.431571    8808 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/disk.qcow2
	I0923 03:30:49.441287    8808 main.go:141] libmachine: STDOUT: 
	I0923 03:30:49.441313    8808 main.go:141] libmachine: STDERR: 
	I0923 03:30:49.441381    8808 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/disk.qcow2 +20000M
	I0923 03:30:49.454457    8808 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:30:49.454477    8808 main.go:141] libmachine: STDERR: 
	I0923 03:30:49.454494    8808 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/disk.qcow2
	I0923 03:30:49.454501    8808 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:30:49.454513    8808 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:30:49.454547    8808 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:77:88:78:bb:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/disk.qcow2
	I0923 03:30:49.456144    8808 main.go:141] libmachine: STDOUT: 
	I0923 03:30:49.456158    8808 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:30:49.456177    8808 client.go:171] duration metric: took 233.662541ms to LocalClient.Create
	I0923 03:30:51.458265    8808 start.go:128] duration metric: took 2.260383958s to createHost
	I0923 03:30:51.458308    8808 start.go:83] releasing machines lock for "offline-docker-430000", held for 2.2604735s
	W0923 03:30:51.458325    8808 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:30:51.468347    8808 out.go:177] * Deleting "offline-docker-430000" in qemu2 ...
	W0923 03:30:51.484080    8808 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:30:51.484091    8808 start.go:729] Will try again in 5 seconds ...
	I0923 03:30:56.486206    8808 start.go:360] acquireMachinesLock for offline-docker-430000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:30:56.486745    8808 start.go:364] duration metric: took 356.833µs to acquireMachinesLock for "offline-docker-430000"
	I0923 03:30:56.486890    8808 start.go:93] Provisioning new machine with config: &{Name:offline-docker-430000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:30:56.487209    8808 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:30:56.500934    8808 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 03:30:56.552461    8808 start.go:159] libmachine.API.Create for "offline-docker-430000" (driver="qemu2")
	I0923 03:30:56.552517    8808 client.go:168] LocalClient.Create starting
	I0923 03:30:56.552638    8808 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:30:56.552717    8808 main.go:141] libmachine: Decoding PEM data...
	I0923 03:30:56.552731    8808 main.go:141] libmachine: Parsing certificate...
	I0923 03:30:56.552790    8808 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:30:56.552852    8808 main.go:141] libmachine: Decoding PEM data...
	I0923 03:30:56.552867    8808 main.go:141] libmachine: Parsing certificate...
	I0923 03:30:56.553549    8808 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:30:56.728516    8808 main.go:141] libmachine: Creating SSH key...
	I0923 03:30:56.876064    8808 main.go:141] libmachine: Creating Disk image...
	I0923 03:30:56.876071    8808 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:30:56.876270    8808 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/disk.qcow2
	I0923 03:30:56.885434    8808 main.go:141] libmachine: STDOUT: 
	I0923 03:30:56.885461    8808 main.go:141] libmachine: STDERR: 
	I0923 03:30:56.885526    8808 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/disk.qcow2 +20000M
	I0923 03:30:56.893451    8808 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:30:56.893464    8808 main.go:141] libmachine: STDERR: 
	I0923 03:30:56.893477    8808 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/disk.qcow2
	I0923 03:30:56.893482    8808 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:30:56.893489    8808 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:30:56.893513    8808 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:a4:e9:2b:51:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/offline-docker-430000/disk.qcow2
	I0923 03:30:56.895107    8808 main.go:141] libmachine: STDOUT: 
	I0923 03:30:56.895119    8808 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:30:56.895134    8808 client.go:171] duration metric: took 342.618458ms to LocalClient.Create
	I0923 03:30:58.897307    8808 start.go:128] duration metric: took 2.41009325s to createHost
	I0923 03:30:58.897414    8808 start.go:83] releasing machines lock for "offline-docker-430000", held for 2.41069275s
	W0923 03:30:58.898450    8808 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-430000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-430000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:30:58.918686    8808 out.go:201] 
	W0923 03:30:58.922654    8808 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:30:58.922705    8808 out.go:270] * 
	* 
	W0923 03:30:58.925496    8808 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:30:58.940536    8808 out.go:201] 

** /stderr **
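Up to the socket_vmnet_client launch, the log above shows the disk-preparation phase succeeding: both qemu-img calls return with empty STDERR. Those two calls amount to the following sketch (the file names here are placeholders, not the test's real machine directory):

	// Sketch of the disk-preparation step visible in the log: convert the
	// raw scratch image to qcow2, then grow it by the requested disk size.
	// Paths are placeholders; only the qemu-img subcommands come from the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command("qemu-img", args...).CombinedOutput()
		fmt.Printf("qemu-img %v: %s\n", args, out)
		return err
	}

	func main() {
		const disk = "disk.qcow2" // placeholder for the machine directory path
		if err := run("convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", disk); err != nil {
			panic(err)
		}
		if err := run("resize", disk, "+20000M"); err != nil {
			panic(err)
		}
	}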
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-430000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-23 03:30:58.961233 -0700 PDT m=+713.783649793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-430000 -n offline-docker-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-430000 -n offline-docker-430000: exit status 7 (50.624917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-430000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-430000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-430000
--- FAIL: TestOffline (10.01s)
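Every start in this run dies the same way: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which must first reach the socket_vmnet daemon on the unix socket /var/run/socket_vmnet, and "Connection refused" means nothing is accepting on that socket. A quick probe of the socket (a hypothetical diagnostic, not part of the suite) confirms the daemon's state before re-running:

	// Hypothetical diagnostic, not minikube code: a plain net.Dial against
	// the unix socket reproduces the "Connection refused" the tests hit.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing command line
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet is not reachable at %s: %v\n", sock, err)
			return
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}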

TestAddons/Setup (10.23s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-858000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-858000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.223038208s)

-- stdout --
	* [addons-858000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-858000" primary control-plane node in "addons-858000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-858000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:19:28.915984    7199 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:19:28.916130    7199 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:19:28.916133    7199 out.go:358] Setting ErrFile to fd 2...
	I0923 03:19:28.916136    7199 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:19:28.916286    7199 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:19:28.917369    7199 out.go:352] Setting JSON to false
	I0923 03:19:28.933484    7199 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4739,"bootTime":1727082029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:19:28.933545    7199 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:19:28.938511    7199 out.go:177] * [addons-858000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:19:28.945535    7199 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:19:28.945583    7199 notify.go:220] Checking for updates...
	I0923 03:19:28.952476    7199 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:19:28.955437    7199 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:19:28.958527    7199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:19:28.961484    7199 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:19:28.964477    7199 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:19:28.967618    7199 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:19:28.971357    7199 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:19:28.978459    7199 start.go:297] selected driver: qemu2
	I0923 03:19:28.978465    7199 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:19:28.978473    7199 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:19:28.980923    7199 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:19:28.984455    7199 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:19:28.987559    7199 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:19:28.987585    7199 cni.go:84] Creating CNI manager for ""
	I0923 03:19:28.987613    7199 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:19:28.987618    7199 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:19:28.987657    7199 start.go:340] cluster config:
	{Name:addons-858000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-858000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:19:28.991503    7199 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:19:28.997397    7199 out.go:177] * Starting "addons-858000" primary control-plane node in "addons-858000" cluster
	I0923 03:19:29.001465    7199 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:19:29.001481    7199 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:19:29.001488    7199 cache.go:56] Caching tarball of preloaded images
	I0923 03:19:29.001549    7199 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:19:29.001554    7199 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:19:29.001785    7199 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/addons-858000/config.json ...
	I0923 03:19:29.001798    7199 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/addons-858000/config.json: {Name:mk7daf42b25064af57866b75b4669374dd83f0b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:19:29.002208    7199 start.go:360] acquireMachinesLock for addons-858000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:19:29.002291    7199 start.go:364] duration metric: took 77.375µs to acquireMachinesLock for "addons-858000"
	I0923 03:19:29.002318    7199 start.go:93] Provisioning new machine with config: &{Name:addons-858000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-858000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:19:29.002344    7199 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:19:29.011501    7199 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 03:19:29.031417    7199 start.go:159] libmachine.API.Create for "addons-858000" (driver="qemu2")
	I0923 03:19:29.031450    7199 client.go:168] LocalClient.Create starting
	I0923 03:19:29.031602    7199 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:19:29.233213    7199 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:19:29.354276    7199 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:19:29.557548    7199 main.go:141] libmachine: Creating SSH key...
	I0923 03:19:29.602420    7199 main.go:141] libmachine: Creating Disk image...
	I0923 03:19:29.602426    7199 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:19:29.602650    7199 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/disk.qcow2
	I0923 03:19:29.611824    7199 main.go:141] libmachine: STDOUT: 
	I0923 03:19:29.611854    7199 main.go:141] libmachine: STDERR: 
	I0923 03:19:29.611914    7199 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/disk.qcow2 +20000M
	I0923 03:19:29.619777    7199 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:19:29.619802    7199 main.go:141] libmachine: STDERR: 
	I0923 03:19:29.619815    7199 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/disk.qcow2
	I0923 03:19:29.619819    7199 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:19:29.619857    7199 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:19:29.619887    7199 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:4d:2e:94:de:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/disk.qcow2
	I0923 03:19:29.621548    7199 main.go:141] libmachine: STDOUT: 
	I0923 03:19:29.621562    7199 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:19:29.621591    7199 client.go:171] duration metric: took 590.138667ms to LocalClient.Create
	I0923 03:19:31.623810    7199 start.go:128] duration metric: took 2.621493s to createHost
	I0923 03:19:31.623897    7199 start.go:83] releasing machines lock for "addons-858000", held for 2.62165525s
	W0923 03:19:31.623939    7199 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:19:31.639337    7199 out.go:177] * Deleting "addons-858000" in qemu2 ...
	W0923 03:19:31.663855    7199 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:19:31.663893    7199 start.go:729] Will try again in 5 seconds ...
	I0923 03:19:36.666200    7199 start.go:360] acquireMachinesLock for addons-858000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:19:36.666682    7199 start.go:364] duration metric: took 382µs to acquireMachinesLock for "addons-858000"
	I0923 03:19:36.666813    7199 start.go:93] Provisioning new machine with config: &{Name:addons-858000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-858000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:19:36.667151    7199 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:19:36.676567    7199 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 03:19:36.726851    7199 start.go:159] libmachine.API.Create for "addons-858000" (driver="qemu2")
	I0923 03:19:36.726928    7199 client.go:168] LocalClient.Create starting
	I0923 03:19:36.727100    7199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:19:36.727176    7199 main.go:141] libmachine: Decoding PEM data...
	I0923 03:19:36.727198    7199 main.go:141] libmachine: Parsing certificate...
	I0923 03:19:36.727294    7199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:19:36.727358    7199 main.go:141] libmachine: Decoding PEM data...
	I0923 03:19:36.727373    7199 main.go:141] libmachine: Parsing certificate...
	I0923 03:19:36.728026    7199 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:19:36.915069    7199 main.go:141] libmachine: Creating SSH key...
	I0923 03:19:37.046043    7199 main.go:141] libmachine: Creating Disk image...
	I0923 03:19:37.046053    7199 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:19:37.046288    7199 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/disk.qcow2
	I0923 03:19:37.055986    7199 main.go:141] libmachine: STDOUT: 
	I0923 03:19:37.056008    7199 main.go:141] libmachine: STDERR: 
	I0923 03:19:37.056081    7199 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/disk.qcow2 +20000M
	I0923 03:19:37.063813    7199 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:19:37.063831    7199 main.go:141] libmachine: STDERR: 
	I0923 03:19:37.063843    7199 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/disk.qcow2
	I0923 03:19:37.063848    7199 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:19:37.063860    7199 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:19:37.063892    7199 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:49:28:ff:1d:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/addons-858000/disk.qcow2
	I0923 03:19:37.065496    7199 main.go:141] libmachine: STDOUT: 
	I0923 03:19:37.065510    7199 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:19:37.065525    7199 client.go:171] duration metric: took 338.588334ms to LocalClient.Create
	I0923 03:19:39.067649    7199 start.go:128] duration metric: took 2.400509208s to createHost
	I0923 03:19:39.067706    7199 start.go:83] releasing machines lock for "addons-858000", held for 2.401048541s
	W0923 03:19:39.068064    7199 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-858000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-858000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:19:39.077634    7199 out.go:201] 
	W0923 03:19:39.083689    7199 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:19:39.083736    7199 out.go:270] * 
	* 
	W0923 03:19:39.086394    7199 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:19:39.095608    7199 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-858000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.23s)
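The addons run traces the same shape as TestOffline: the first create fails, the half-created profile is deleted, minikube waits five seconds and retries once, and the second failure becomes the GUEST_PROVISION exit. As a rough sketch of that control flow (inferred from the log lines above, not lifted from minikube's source):

	// Minimal sketch of the retry shape visible in the log: one failed
	// create, a cleanup, a fixed 5-second wait, one more attempt, then a
	// hard failure. createHost is a stand-in, not libmachine's real API.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		// Always fails the way the log does.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			// The real run deletes the profile here before retrying.
			time.Sleep(5 * time.Second)
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}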

TestCertOptions (10.27s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-903000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-903000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.002021541s)

-- stdout --
	* [cert-options-903000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-903000" primary control-plane node in "cert-options-903000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-903000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-903000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-903000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-903000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-903000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (82.55375ms)

-- stdout --
	* The control-plane node cert-options-903000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-903000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-903000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-903000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-903000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-903000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.636541ms)

-- stdout --
	* The control-plane node cert-options-903000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-903000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-903000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-903000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-903000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-23 03:31:29.949477 -0700 PDT m=+744.772571334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-903000 -n cert-options-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-903000 -n cert-options-903000: exit status 7 (30.541667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-903000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-903000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-903000
--- FAIL: TestCertOptions (10.27s)
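The SAN assertions at cert_options_test.go:69 never had a certificate to inspect, since the openssl command has to run inside the VM and the VM never booted. For reference, the same check can be done offline against a copy of apiserver.crt; this sketch is hypothetical (the local file path is an assumption) and uses only crypto/x509:

	// Hypothetical offline equivalent of the openssl SAN check the test
	// could not run: parse apiserver.crt and confirm each expected name/IP
	// appears in the certificate's SANs.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // copied out of the VM in a real run
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// VerifyHostname accepts both DNS names and IP strings.
		for _, want := range []string{"127.0.0.1", "192.168.15.15", "localhost", "www.google.com"} {
			if err := cert.VerifyHostname(want); err != nil {
				fmt.Printf("missing from SAN: %s (%v)\n", want, err)
			}
		}
	}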

TestCertExpiration (195.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-413000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-413000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.045416917s)

-- stdout --
	* [cert-expiration-413000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-413000" primary control-plane node in "cert-expiration-413000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-413000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-413000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-413000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.200983375s)

-- stdout --
	* [cert-expiration-413000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-413000" primary control-plane node in "cert-expiration-413000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-413000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-413000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-413000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-413000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-413000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-413000" primary control-plane node in "cert-expiration-413000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-413000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-413000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-413000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-23 03:34:29.915161 -0700 PDT m=+924.742760751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-413000 -n cert-expiration-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-413000 -n cert-expiration-413000: exit status 7 (31.148875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-413000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-413000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-413000
--- FAIL: TestCertExpiration (195.36s)
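TestCertExpiration spends most of its 195 seconds waiting out the 3-minute --cert-expiration window before restarting with 8760h and looking for an expired-cert warning; with no VM, no warning can appear. The expiry condition itself reduces to a NotAfter comparison, sketched here under the assumption of a locally readable apiserver.crt:

	// Illustrative only (assumed helper, not test code): the expired-cert
	// warning the test looks for boils down to comparing the certificate's
	// NotAfter with the current time once the 3-minute window has passed.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // path is an assumption
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().After(cert.NotAfter) {
			fmt.Println("! Certificate expired at", cert.NotAfter)
		}
	}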

TestDockerFlags (10.14s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-506000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-506000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.904716125s)

-- stdout --
	* [docker-flags-506000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-506000" primary control-plane node in "docker-flags-506000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-506000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:31:09.678197    9007 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:31:09.678338    9007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:31:09.678342    9007 out.go:358] Setting ErrFile to fd 2...
	I0923 03:31:09.678344    9007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:31:09.678461    9007 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:31:09.679531    9007 out.go:352] Setting JSON to false
	I0923 03:31:09.695899    9007 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5440,"bootTime":1727082029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:31:09.695979    9007 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:31:09.701691    9007 out.go:177] * [docker-flags-506000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:31:09.708504    9007 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:31:09.708572    9007 notify.go:220] Checking for updates...
	I0923 03:31:09.715515    9007 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:31:09.718446    9007 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:31:09.722478    9007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:31:09.725515    9007 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:31:09.728506    9007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:31:09.731817    9007 config.go:182] Loaded profile config "force-systemd-flag-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:31:09.731883    9007 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:31:09.731940    9007 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:31:09.735494    9007 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:31:09.742524    9007 start.go:297] selected driver: qemu2
	I0923 03:31:09.742531    9007 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:31:09.742538    9007 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:31:09.745002    9007 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:31:09.748570    9007 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:31:09.751540    9007 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0923 03:31:09.751555    9007 cni.go:84] Creating CNI manager for ""
	I0923 03:31:09.751576    9007 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:31:09.751583    9007 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:31:09.751610    9007 start.go:340] cluster config:
	{Name:docker-flags-506000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:31:09.755508    9007 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:31:09.764358    9007 out.go:177] * Starting "docker-flags-506000" primary control-plane node in "docker-flags-506000" cluster
	I0923 03:31:09.768473    9007 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:31:09.768498    9007 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:31:09.768505    9007 cache.go:56] Caching tarball of preloaded images
	I0923 03:31:09.768569    9007 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:31:09.768575    9007 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:31:09.768631    9007 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/docker-flags-506000/config.json ...
	I0923 03:31:09.768643    9007 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/docker-flags-506000/config.json: {Name:mkf97d2bf56d3bfdbd2a493d408fcde55a626107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:31:09.768883    9007 start.go:360] acquireMachinesLock for docker-flags-506000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:31:09.768923    9007 start.go:364] duration metric: took 32.333µs to acquireMachinesLock for "docker-flags-506000"
	I0923 03:31:09.768938    9007 start.go:93] Provisioning new machine with config: &{Name:docker-flags-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:31:09.768970    9007 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:31:09.777421    9007 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 03:31:09.795971    9007 start.go:159] libmachine.API.Create for "docker-flags-506000" (driver="qemu2")
	I0923 03:31:09.796003    9007 client.go:168] LocalClient.Create starting
	I0923 03:31:09.796077    9007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:31:09.796112    9007 main.go:141] libmachine: Decoding PEM data...
	I0923 03:31:09.796123    9007 main.go:141] libmachine: Parsing certificate...
	I0923 03:31:09.796168    9007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:31:09.796193    9007 main.go:141] libmachine: Decoding PEM data...
	I0923 03:31:09.796199    9007 main.go:141] libmachine: Parsing certificate...
	I0923 03:31:09.796547    9007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:31:09.960973    9007 main.go:141] libmachine: Creating SSH key...
	I0923 03:31:10.026280    9007 main.go:141] libmachine: Creating Disk image...
	I0923 03:31:10.026286    9007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:31:10.026482    9007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/disk.qcow2
	I0923 03:31:10.035667    9007 main.go:141] libmachine: STDOUT: 
	I0923 03:31:10.035683    9007 main.go:141] libmachine: STDERR: 
	I0923 03:31:10.035742    9007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/disk.qcow2 +20000M
	I0923 03:31:10.043542    9007 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:31:10.043554    9007 main.go:141] libmachine: STDERR: 
	I0923 03:31:10.043570    9007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/disk.qcow2
	I0923 03:31:10.043575    9007 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:31:10.043588    9007 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:31:10.043625    9007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:5f:a1:6e:ec:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/disk.qcow2
	I0923 03:31:10.045257    9007 main.go:141] libmachine: STDOUT: 
	I0923 03:31:10.045268    9007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:31:10.045287    9007 client.go:171] duration metric: took 249.283375ms to LocalClient.Create
	I0923 03:31:12.047409    9007 start.go:128] duration metric: took 2.278474625s to createHost
	I0923 03:31:12.047455    9007 start.go:83] releasing machines lock for "docker-flags-506000", held for 2.278569416s
	W0923 03:31:12.047549    9007 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:31:12.068590    9007 out.go:177] * Deleting "docker-flags-506000" in qemu2 ...
	W0923 03:31:12.099063    9007 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:31:12.099079    9007 start.go:729] Will try again in 5 seconds ...
	I0923 03:31:17.101120    9007 start.go:360] acquireMachinesLock for docker-flags-506000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:31:17.101293    9007 start.go:364] duration metric: took 130.834µs to acquireMachinesLock for "docker-flags-506000"
	I0923 03:31:17.101407    9007 start.go:93] Provisioning new machine with config: &{Name:docker-flags-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:31:17.101560    9007 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:31:17.120484    9007 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 03:31:17.161375    9007 start.go:159] libmachine.API.Create for "docker-flags-506000" (driver="qemu2")
	I0923 03:31:17.161443    9007 client.go:168] LocalClient.Create starting
	I0923 03:31:17.161586    9007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:31:17.161653    9007 main.go:141] libmachine: Decoding PEM data...
	I0923 03:31:17.161673    9007 main.go:141] libmachine: Parsing certificate...
	I0923 03:31:17.161736    9007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:31:17.161774    9007 main.go:141] libmachine: Decoding PEM data...
	I0923 03:31:17.161785    9007 main.go:141] libmachine: Parsing certificate...
	I0923 03:31:17.162611    9007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:31:17.340891    9007 main.go:141] libmachine: Creating SSH key...
	I0923 03:31:17.487075    9007 main.go:141] libmachine: Creating Disk image...
	I0923 03:31:17.487082    9007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:31:17.487274    9007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/disk.qcow2
	I0923 03:31:17.496828    9007 main.go:141] libmachine: STDOUT: 
	I0923 03:31:17.496849    9007 main.go:141] libmachine: STDERR: 
	I0923 03:31:17.496921    9007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/disk.qcow2 +20000M
	I0923 03:31:17.504926    9007 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:31:17.504942    9007 main.go:141] libmachine: STDERR: 
	I0923 03:31:17.504954    9007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/disk.qcow2
	I0923 03:31:17.504959    9007 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:31:17.504982    9007 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:31:17.505018    9007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:0b:43:2c:ef:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/docker-flags-506000/disk.qcow2
	I0923 03:31:17.506656    9007 main.go:141] libmachine: STDOUT: 
	I0923 03:31:17.506670    9007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:31:17.506683    9007 client.go:171] duration metric: took 345.231791ms to LocalClient.Create
	I0923 03:31:19.508820    9007 start.go:128] duration metric: took 2.407281458s to createHost
	I0923 03:31:19.508881    9007 start.go:83] releasing machines lock for "docker-flags-506000", held for 2.407622583s
	W0923 03:31:19.509270    9007 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-506000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-506000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:31:19.524809    9007 out.go:201] 
	W0923 03:31:19.528000    9007 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:31:19.528026    9007 out.go:270] * 
	* 
	W0923 03:31:19.530738    9007 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:31:19.542873    9007 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-506000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-506000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-506000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (77.59925ms)

-- stdout --
	* The control-plane node docker-flags-506000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-506000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-506000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-506000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-506000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-506000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-506000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-506000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-506000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.811709ms)

-- stdout --
	* The control-plane node docker-flags-506000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-506000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-506000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-506000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-506000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-506000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-23 03:31:19.679932 -0700 PDT m=+734.502801584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-506000 -n docker-flags-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-506000 -n docker-flags-506000: exit status 7 (28.983375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-506000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-506000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-506000
--- FAIL: TestDockerFlags (10.14s)
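
All of the failures in this group reduce to the same stderr line: Failed to connect to "/var/run/socket_vmnet": Connection refused. In other words, the socket_vmnet daemon was not listening on the CI host when QEMU was launched, so every VM creation (and the retry five seconds later) died at the same point. A minimal Go sketch of a preflight probe for that socket follows; the socket path is taken from the log above, and the program itself is illustrative, not part of minikube:

// Hypothetical preflight probe: checks whether socket_vmnet is accepting
// connections before any qemu2-driver test is attempted.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path reported in every failure above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches this report: the daemon is
		// installed but not accepting connections on its socket.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If a probe like this fails on the agent, restarting the socket_vmnet service on the host should clear this failure and the identical ones in the tests below.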

TestForceSystemdFlag (10.29s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-635000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-635000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.096705542s)

-- stdout --
	* [force-systemd-flag-635000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-635000" primary control-plane node in "force-systemd-flag-635000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-635000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:31:04.408142    8981 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:31:04.408511    8981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:31:04.408516    8981 out.go:358] Setting ErrFile to fd 2...
	I0923 03:31:04.408519    8981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:31:04.408725    8981 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:31:04.410048    8981 out.go:352] Setting JSON to false
	I0923 03:31:04.426336    8981 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5435,"bootTime":1727082029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:31:04.426422    8981 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:31:04.432993    8981 out.go:177] * [force-systemd-flag-635000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:31:04.442038    8981 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:31:04.442086    8981 notify.go:220] Checking for updates...
	I0923 03:31:04.451972    8981 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:31:04.454898    8981 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:31:04.457967    8981 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:31:04.460980    8981 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:31:04.462443    8981 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:31:04.466279    8981 config.go:182] Loaded profile config "force-systemd-env-958000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:31:04.466353    8981 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:31:04.466393    8981 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:31:04.470987    8981 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:31:04.483952    8981 start.go:297] selected driver: qemu2
	I0923 03:31:04.483957    8981 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:31:04.483963    8981 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:31:04.486376    8981 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:31:04.490007    8981 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:31:04.493086    8981 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 03:31:04.493109    8981 cni.go:84] Creating CNI manager for ""
	I0923 03:31:04.493134    8981 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:31:04.493146    8981 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:31:04.493181    8981 start.go:340] cluster config:
	{Name:force-systemd-flag-635000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:31:04.497023    8981 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:31:04.505037    8981 out.go:177] * Starting "force-systemd-flag-635000" primary control-plane node in "force-systemd-flag-635000" cluster
	I0923 03:31:04.508930    8981 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:31:04.508944    8981 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:31:04.508953    8981 cache.go:56] Caching tarball of preloaded images
	I0923 03:31:04.509022    8981 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:31:04.509027    8981 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:31:04.509086    8981 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/force-systemd-flag-635000/config.json ...
	I0923 03:31:04.509097    8981 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/force-systemd-flag-635000/config.json: {Name:mk33466398735c6c729791ef31826b8f90704959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:31:04.509320    8981 start.go:360] acquireMachinesLock for force-systemd-flag-635000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:31:04.509357    8981 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "force-systemd-flag-635000"
	I0923 03:31:04.509371    8981 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-635000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:31:04.509395    8981 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:31:04.515911    8981 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 03:31:04.534414    8981 start.go:159] libmachine.API.Create for "force-systemd-flag-635000" (driver="qemu2")
	I0923 03:31:04.534451    8981 client.go:168] LocalClient.Create starting
	I0923 03:31:04.534518    8981 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:31:04.534550    8981 main.go:141] libmachine: Decoding PEM data...
	I0923 03:31:04.534561    8981 main.go:141] libmachine: Parsing certificate...
	I0923 03:31:04.534601    8981 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:31:04.534625    8981 main.go:141] libmachine: Decoding PEM data...
	I0923 03:31:04.534634    8981 main.go:141] libmachine: Parsing certificate...
	I0923 03:31:04.535065    8981 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:31:04.701559    8981 main.go:141] libmachine: Creating SSH key...
	I0923 03:31:04.918016    8981 main.go:141] libmachine: Creating Disk image...
	I0923 03:31:04.918024    8981 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:31:04.918287    8981 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/disk.qcow2
	I0923 03:31:04.927918    8981 main.go:141] libmachine: STDOUT: 
	I0923 03:31:04.927946    8981 main.go:141] libmachine: STDERR: 
	I0923 03:31:04.928021    8981 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/disk.qcow2 +20000M
	I0923 03:31:04.935978    8981 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:31:04.935992    8981 main.go:141] libmachine: STDERR: 
	I0923 03:31:04.936011    8981 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/disk.qcow2
	I0923 03:31:04.936016    8981 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:31:04.936027    8981 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:31:04.936059    8981 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:9a:e5:a7:5a:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/disk.qcow2
	I0923 03:31:04.937701    8981 main.go:141] libmachine: STDOUT: 
	I0923 03:31:04.937714    8981 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:31:04.937735    8981 client.go:171] duration metric: took 403.287875ms to LocalClient.Create
	I0923 03:31:06.939863    8981 start.go:128] duration metric: took 2.430505208s to createHost
	I0923 03:31:06.939932    8981 start.go:83] releasing machines lock for "force-systemd-flag-635000", held for 2.430615542s
	W0923 03:31:06.939993    8981 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:31:06.966079    8981 out.go:177] * Deleting "force-systemd-flag-635000" in qemu2 ...
	W0923 03:31:06.992805    8981 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:31:06.992826    8981 start.go:729] Will try again in 5 seconds ...
	I0923 03:31:11.994963    8981 start.go:360] acquireMachinesLock for force-systemd-flag-635000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:31:12.047551    8981 start.go:364] duration metric: took 52.465875ms to acquireMachinesLock for "force-systemd-flag-635000"
	I0923 03:31:12.047718    8981 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-635000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:31:12.047979    8981 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:31:12.059717    8981 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 03:31:12.108766    8981 start.go:159] libmachine.API.Create for "force-systemd-flag-635000" (driver="qemu2")
	I0923 03:31:12.108830    8981 client.go:168] LocalClient.Create starting
	I0923 03:31:12.108983    8981 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:31:12.109062    8981 main.go:141] libmachine: Decoding PEM data...
	I0923 03:31:12.109080    8981 main.go:141] libmachine: Parsing certificate...
	I0923 03:31:12.109145    8981 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:31:12.109200    8981 main.go:141] libmachine: Decoding PEM data...
	I0923 03:31:12.109213    8981 main.go:141] libmachine: Parsing certificate...
	I0923 03:31:12.111121    8981 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:31:12.293939    8981 main.go:141] libmachine: Creating SSH key...
	I0923 03:31:12.402510    8981 main.go:141] libmachine: Creating Disk image...
	I0923 03:31:12.402516    8981 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:31:12.402682    8981 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/disk.qcow2
	I0923 03:31:12.412217    8981 main.go:141] libmachine: STDOUT: 
	I0923 03:31:12.412240    8981 main.go:141] libmachine: STDERR: 
	I0923 03:31:12.412308    8981 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/disk.qcow2 +20000M
	I0923 03:31:12.420192    8981 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:31:12.420205    8981 main.go:141] libmachine: STDERR: 
	I0923 03:31:12.420222    8981 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/disk.qcow2
	I0923 03:31:12.420226    8981 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:31:12.420233    8981 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:31:12.420265    8981 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:ac:5b:03:02:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-flag-635000/disk.qcow2
	I0923 03:31:12.421842    8981 main.go:141] libmachine: STDOUT: 
	I0923 03:31:12.421866    8981 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:31:12.421881    8981 client.go:171] duration metric: took 313.042083ms to LocalClient.Create
	I0923 03:31:14.424009    8981 start.go:128] duration metric: took 2.376043333s to createHost
	I0923 03:31:14.424062    8981 start.go:83] releasing machines lock for "force-systemd-flag-635000", held for 2.376538959s
	W0923 03:31:14.424350    8981 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-635000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-635000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:31:14.443728    8981 out.go:201] 
	W0923 03:31:14.449657    8981 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:31:14.449711    8981 out.go:270] * 
	* 
	W0923 03:31:14.452604    8981 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:31:14.462470    8981 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-635000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-635000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-635000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.284375ms)

-- stdout --
	* The control-plane node force-systemd-flag-635000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-635000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-635000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-23 03:31:14.558608 -0700 PDT m=+729.381365209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-635000 -n force-systemd-flag-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-635000 -n force-systemd-flag-635000: exit status 7 (33.70275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-635000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-635000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-635000
--- FAIL: TestForceSystemdFlag (10.29s)
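
Had the VM come up, the check at docker_test.go:110 would have read Docker's cgroup driver over minikube ssh. A standalone sketch of that comparison is below; the binary path and profile name are copied from the log, while treating "systemd" as the expected value is an assumption based on the test's name rather than on the test source:

// Sketch of the cgroup-driver check this test never reached. Exit status 83
// in the report means the host was stopped, so the ssh step failed first.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-635000",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	// Assumed expectation: with --force-systemd the driver reads "systemd"
	// rather than the default "cgroupfs".
	if got := strings.TrimSpace(string(out)); got != "systemd" {
		fmt.Printf("unexpected cgroup driver: %q\n", got)
	}
}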

TestForceSystemdEnv (10.56s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-958000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I0923 03:30:59.307290    7121 install.go:79] stdout: 
W0923 03:30:59.307445    7121 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/001/docker-machine-driver-hyperkit 

I0923 03:30:59.307463    7121 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/001/docker-machine-driver-hyperkit]
I0923 03:30:59.320428    7121 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/001/docker-machine-driver-hyperkit]
I0923 03:30:59.332130    7121 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/001/docker-machine-driver-hyperkit]
I0923 03:30:59.342394    7121 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/001/docker-machine-driver-hyperkit]
I0923 03:30:59.363254    7121 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 03:30:59.363402    7121 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I0923 03:31:01.129910    7121 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0923 03:31:01.129935    7121 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0923 03:31:01.129985    7121 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0923 03:31:01.130021    7121 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/002/docker-machine-driver-hyperkit
I0923 03:31:01.528889    7121 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x108ee6d40 0x108ee6d40 0x108ee6d40 0x108ee6d40 0x108ee6d40 0x108ee6d40 0x108ee6d40] Decompressors:map[bz2:0x1400012b5d0 gz:0x1400012b5d8 tar:0x1400012b580 tar.bz2:0x1400012b590 tar.gz:0x1400012b5a0 tar.xz:0x1400012b5b0 tar.zst:0x1400012b5c0 tbz2:0x1400012b590 tgz:0x1400012b5a0 txz:0x1400012b5b0 tzst:0x1400012b5c0 xz:0x1400012b5e0 zip:0x1400012b5f0 zst:0x1400012b5e8] Getters:map[file:0x1400076f0f0 http:0x14000116e60 https:0x14000116eb0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 03:31:01.528992    7121 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/002/docker-machine-driver-hyperkit
I0923 03:31:04.337061    7121 install.go:79] stdout: 
W0923 03:31:04.337225    7121 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/002/docker-machine-driver-hyperkit 

I0923 03:31:04.337249    7121 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/002/docker-machine-driver-hyperkit]
I0923 03:31:04.351227    7121 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/002/docker-machine-driver-hyperkit]
I0923 03:31:04.362690    7121 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/002/docker-machine-driver-hyperkit]
I0923 03:31:04.371412    7121 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/002/docker-machine-driver-hyperkit]
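
The hyperkit lines interleaved above come from a parallel driver-install check in the same test binary (pid 7121, working in the TestHyperKitDriverInstallOrUpdate1570438969 temp dir): it finds a driver at version 1.2.0 where 1.11.0 is wanted, downloads from the v1.3.0 release URL, gets a 404 on the arm64-specific checksum file, and retries the common (unsuffixed) asset, which succeeds. That fallback can be confirmed from any shell; a hedged sketch using the URLs verbatim from the log (curl -f maps an HTTP 404 to exit code 22):

$ curl -fsSLI https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 >/dev/null; echo $?
# 22 - the arm64-specific checksum asset is missing, hence the 404 logged above
$ curl -fsSLI https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 >/dev/null; echo $?
# 0 - the common asset exists, so the retry download completes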
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-958000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.366207s)

-- stdout --
	* [force-systemd-env-958000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-958000" primary control-plane node in "force-systemd-env-958000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-958000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:30:59.123131    8946 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:30:59.123251    8946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:30:59.123254    8946 out.go:358] Setting ErrFile to fd 2...
	I0923 03:30:59.123257    8946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:30:59.123377    8946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:30:59.124518    8946 out.go:352] Setting JSON to false
	I0923 03:30:59.141666    8946 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5430,"bootTime":1727082029,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:30:59.141730    8946 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:30:59.149101    8946 out.go:177] * [force-systemd-env-958000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:30:59.156994    8946 notify.go:220] Checking for updates...
	I0923 03:30:59.161880    8946 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:30:59.168818    8946 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:30:59.176928    8946 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:30:59.184923    8946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:30:59.192874    8946 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:30:59.200966    8946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0923 03:30:59.205209    8946 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:30:59.205249    8946 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:30:59.207887    8946 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:30:59.214840    8946 start.go:297] selected driver: qemu2
	I0923 03:30:59.214845    8946 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:30:59.214850    8946 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:30:59.217052    8946 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:30:59.220927    8946 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:30:59.224843    8946 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 03:30:59.224864    8946 cni.go:84] Creating CNI manager for ""
	I0923 03:30:59.224894    8946 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:30:59.224905    8946 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:30:59.224938    8946 start.go:340] cluster config:
	{Name:force-systemd-env-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:30:59.228569    8946 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:30:59.236948    8946 out.go:177] * Starting "force-systemd-env-958000" primary control-plane node in "force-systemd-env-958000" cluster
	I0923 03:30:59.239921    8946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:30:59.239938    8946 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:30:59.239945    8946 cache.go:56] Caching tarball of preloaded images
	I0923 03:30:59.240010    8946 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:30:59.240016    8946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:30:59.240072    8946 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/force-systemd-env-958000/config.json ...
	I0923 03:30:59.240083    8946 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/force-systemd-env-958000/config.json: {Name:mk18a0d854670845c9560ccb0a94d0f0294d2308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:30:59.240291    8946 start.go:360] acquireMachinesLock for force-systemd-env-958000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:30:59.240327    8946 start.go:364] duration metric: took 28.208µs to acquireMachinesLock for "force-systemd-env-958000"
	I0923 03:30:59.240340    8946 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-958000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:30:59.240370    8946 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:30:59.248908    8946 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 03:30:59.265985    8946 start.go:159] libmachine.API.Create for "force-systemd-env-958000" (driver="qemu2")
	I0923 03:30:59.266024    8946 client.go:168] LocalClient.Create starting
	I0923 03:30:59.266084    8946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:30:59.266124    8946 main.go:141] libmachine: Decoding PEM data...
	I0923 03:30:59.266132    8946 main.go:141] libmachine: Parsing certificate...
	I0923 03:30:59.266168    8946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:30:59.266191    8946 main.go:141] libmachine: Decoding PEM data...
	I0923 03:30:59.266200    8946 main.go:141] libmachine: Parsing certificate...
	I0923 03:30:59.266545    8946 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:30:59.432511    8946 main.go:141] libmachine: Creating SSH key...
	I0923 03:30:59.490382    8946 main.go:141] libmachine: Creating Disk image...
	I0923 03:30:59.490423    8946 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:30:59.490664    8946 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/disk.qcow2
	I0923 03:30:59.500517    8946 main.go:141] libmachine: STDOUT: 
	I0923 03:30:59.500533    8946 main.go:141] libmachine: STDERR: 
	I0923 03:30:59.500613    8946 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/disk.qcow2 +20000M
	I0923 03:30:59.508964    8946 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:30:59.508985    8946 main.go:141] libmachine: STDERR: 
	I0923 03:30:59.509001    8946 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/disk.qcow2
	I0923 03:30:59.509010    8946 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:30:59.509020    8946 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:30:59.509051    8946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:e3:fb:63:04:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/disk.qcow2
	I0923 03:30:59.510817    8946 main.go:141] libmachine: STDOUT: 
	I0923 03:30:59.510832    8946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:30:59.510854    8946 client.go:171] duration metric: took 244.829417ms to LocalClient.Create
	I0923 03:31:01.513077    8946 start.go:128] duration metric: took 2.2726735s to createHost
	I0923 03:31:01.513163    8946 start.go:83] releasing machines lock for "force-systemd-env-958000", held for 2.272865333s
	W0923 03:31:01.513271    8946 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:31:01.528615    8946 out.go:177] * Deleting "force-systemd-env-958000" in qemu2 ...
	W0923 03:31:01.562085    8946 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:31:01.562110    8946 start.go:729] Will try again in 5 seconds ...
	I0923 03:31:06.564273    8946 start.go:360] acquireMachinesLock for force-systemd-env-958000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:31:06.940087    8946 start.go:364] duration metric: took 375.676083ms to acquireMachinesLock for "force-systemd-env-958000"
	I0923 03:31:06.940195    8946 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-958000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:31:06.940425    8946 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:31:06.954988    8946 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 03:31:07.004164    8946 start.go:159] libmachine.API.Create for "force-systemd-env-958000" (driver="qemu2")
	I0923 03:31:07.004212    8946 client.go:168] LocalClient.Create starting
	I0923 03:31:07.004344    8946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:31:07.004422    8946 main.go:141] libmachine: Decoding PEM data...
	I0923 03:31:07.004437    8946 main.go:141] libmachine: Parsing certificate...
	I0923 03:31:07.004497    8946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:31:07.004544    8946 main.go:141] libmachine: Decoding PEM data...
	I0923 03:31:07.004559    8946 main.go:141] libmachine: Parsing certificate...
	I0923 03:31:07.006503    8946 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:31:07.189139    8946 main.go:141] libmachine: Creating SSH key...
	I0923 03:31:07.376100    8946 main.go:141] libmachine: Creating Disk image...
	I0923 03:31:07.376109    8946 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:31:07.376338    8946 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/disk.qcow2
	I0923 03:31:07.386205    8946 main.go:141] libmachine: STDOUT: 
	I0923 03:31:07.386229    8946 main.go:141] libmachine: STDERR: 
	I0923 03:31:07.386295    8946 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/disk.qcow2 +20000M
	I0923 03:31:07.394415    8946 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:31:07.394430    8946 main.go:141] libmachine: STDERR: 
	I0923 03:31:07.394450    8946 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/disk.qcow2
	I0923 03:31:07.394455    8946 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:31:07.394462    8946 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:31:07.394489    8946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:4c:0c:35:c7:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/force-systemd-env-958000/disk.qcow2
	I0923 03:31:07.396110    8946 main.go:141] libmachine: STDOUT: 
	I0923 03:31:07.396122    8946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:31:07.396139    8946 client.go:171] duration metric: took 391.929292ms to LocalClient.Create
	I0923 03:31:09.398527    8946 start.go:128] duration metric: took 2.458082375s to createHost
	I0923 03:31:09.398652    8946 start.go:83] releasing machines lock for "force-systemd-env-958000", held for 2.458591875s
	W0923 03:31:09.398996    8946 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-958000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-958000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:31:09.419679    8946 out.go:201] 
	W0923 03:31:09.431583    8946 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:31:09.431614    8946 out.go:270] * 
	* 
	W0923 03:31:09.434372    8946 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:31:09.444492    8946 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-958000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-958000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-958000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.164625ms)

-- stdout --
	* The control-plane node force-systemd-env-958000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-958000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-958000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-23 03:31:09.54327 -0700 PDT m=+724.365917251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-958000 -n force-systemd-env-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-958000 -n force-systemd-env-958000: exit status 7 (34.297417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-958000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-958000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-958000
--- FAIL: TestForceSystemdEnv (10.56s)
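
Both force-systemd tests, and most other failures in this report, bottom out in the same host-side error: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to the /var/run/socket_vmnet unix socket is refused, i.e. no socket_vmnet daemon was listening on the CI host. A minimal triage sketch for such a host (the foreground invocation and the 192.168.105.1 gateway address are the examples given in the socket_vmnet README, not values taken from this run):

$ ls -l /var/run/socket_vmnet    # does the socket exist at the configured SocketVMnetPath?
$ pgrep -fl socket_vmnet         # is a daemon process alive at all?
$ sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
# run the daemon in the foreground to watch for errors; creating the vmnet interface requires root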

TestErrorSpam/setup (9.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-177000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-177000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 --driver=qemu2 : exit status 80 (9.907075459s)

-- stdout --
	* [nospam-177000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-177000" primary control-plane node in "nospam-177000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-177000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-177000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-177000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-177000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-177000] minikube v1.34.0 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19689
- KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-177000" primary control-plane node in "nospam-177000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-177000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-177000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.91s)
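
The unexpected-stderr assertions above are all downstream of the same refused socket, and the condition can be reproduced without minikube by probing the socket directly. A quick sketch, assuming the BSD netcat that ships with macOS (its -U flag connects to a unix-domain socket):

$ nc -U /var/run/socket_vmnet < /dev/null; echo $?
# prints "Connection refused" and exits non-zero while no daemon listens; exits 0 once socket_vmnet is up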

TestFunctional/serial/StartWithProxy (10.02s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-824000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-824000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.953308583s)

-- stdout --
	* [functional-824000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-824000" primary control-plane node in "functional-824000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-824000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51071 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51071 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51071 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-824000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-824000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-824000] minikube v1.34.0 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19689
- KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-824000" primary control-plane node in "functional-824000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-824000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51071 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51071 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51071 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-824000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (68.973291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.02s)
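
Unlike its neighbors, this test's missing want-strings ("Found network options:", "You appear to be using a proxy") concern proxy handling rather than the VM: stderr shows the harness had HTTP_PROXY=localhost:51071 exported, which minikube acknowledged with the "Local proxy ignored" warnings before dying on the socket. A rough reproduction sketch under that assumption (port 51071 is specific to this run; any local port would do):

$ HTTP_PROXY=localhost:51071 out/minikube-darwin-arm64 start -p functional-824000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2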

TestFunctional/serial/SoftStart (5.25s)

=== RUN   TestFunctional/serial/SoftStart
I0923 03:20:09.794666    7121 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-824000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-824000 --alsologtostderr -v=8: exit status 80 (5.1844355s)

-- stdout --
	* [functional-824000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-824000" primary control-plane node in "functional-824000" cluster
	* Restarting existing qemu2 VM for "functional-824000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-824000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:20:09.825474    7349 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:20:09.825608    7349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:20:09.825611    7349 out.go:358] Setting ErrFile to fd 2...
	I0923 03:20:09.825614    7349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:20:09.825753    7349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:20:09.826758    7349 out.go:352] Setting JSON to false
	I0923 03:20:09.842890    7349 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4780,"bootTime":1727082029,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:20:09.842959    7349 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:20:09.847099    7349 out.go:177] * [functional-824000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:20:09.854079    7349 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:20:09.854132    7349 notify.go:220] Checking for updates...
	I0923 03:20:09.862007    7349 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:20:09.866085    7349 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:20:09.869035    7349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:20:09.872055    7349 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:20:09.875055    7349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:20:09.878184    7349 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:20:09.878234    7349 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:20:09.883031    7349 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:20:09.889969    7349 start.go:297] selected driver: qemu2
	I0923 03:20:09.889974    7349 start.go:901] validating driver "qemu2" against &{Name:functional-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:20:09.890020    7349 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:20:09.892303    7349 cni.go:84] Creating CNI manager for ""
	I0923 03:20:09.892351    7349 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:20:09.892400    7349 start.go:340] cluster config:
	{Name:functional-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:20:09.895961    7349 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:20:09.903093    7349 out.go:177] * Starting "functional-824000" primary control-plane node in "functional-824000" cluster
	I0923 03:20:09.906988    7349 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:20:09.907007    7349 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:20:09.907014    7349 cache.go:56] Caching tarball of preloaded images
	I0923 03:20:09.907073    7349 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:20:09.907079    7349 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:20:09.907128    7349 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/functional-824000/config.json ...
	I0923 03:20:09.907603    7349 start.go:360] acquireMachinesLock for functional-824000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:20:09.907634    7349 start.go:364] duration metric: took 23.959µs to acquireMachinesLock for "functional-824000"
	I0923 03:20:09.907644    7349 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:20:09.907648    7349 fix.go:54] fixHost starting: 
	I0923 03:20:09.907770    7349 fix.go:112] recreateIfNeeded on functional-824000: state=Stopped err=<nil>
	W0923 03:20:09.907779    7349 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:20:09.915964    7349 out.go:177] * Restarting existing qemu2 VM for "functional-824000" ...
	I0923 03:20:09.920031    7349 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:20:09.920068    7349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:5d:a0:a3:a6:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/disk.qcow2
	I0923 03:20:09.922097    7349 main.go:141] libmachine: STDOUT: 
	I0923 03:20:09.922119    7349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:20:09.922149    7349 fix.go:56] duration metric: took 14.499375ms for fixHost
	I0923 03:20:09.922155    7349 start.go:83] releasing machines lock for "functional-824000", held for 14.517292ms
	W0923 03:20:09.922163    7349 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:20:09.922201    7349 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:20:09.922206    7349 start.go:729] Will try again in 5 seconds ...
	I0923 03:20:14.924282    7349 start.go:360] acquireMachinesLock for functional-824000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:20:14.924683    7349 start.go:364] duration metric: took 303.5µs to acquireMachinesLock for "functional-824000"
	I0923 03:20:14.924797    7349 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:20:14.924819    7349 fix.go:54] fixHost starting: 
	I0923 03:20:14.925522    7349 fix.go:112] recreateIfNeeded on functional-824000: state=Stopped err=<nil>
	W0923 03:20:14.925547    7349 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:20:14.929964    7349 out.go:177] * Restarting existing qemu2 VM for "functional-824000" ...
	I0923 03:20:14.937919    7349 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:20:14.938114    7349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:5d:a0:a3:a6:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/disk.qcow2
	I0923 03:20:14.947110    7349 main.go:141] libmachine: STDOUT: 
	I0923 03:20:14.947169    7349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:20:14.947237    7349 fix.go:56] duration metric: took 22.421875ms for fixHost
	I0923 03:20:14.947256    7349 start.go:83] releasing machines lock for "functional-824000", held for 22.551833ms
	W0923 03:20:14.947409    7349 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-824000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-824000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:20:14.951899    7349 out.go:201] 
	W0923 03:20:14.955899    7349 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:20:14.955959    7349 out.go:270] * 
	* 
	W0923 03:20:14.958817    7349 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:20:14.965673    7349 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-824000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.186132416s for "functional-824000" cluster.
I0923 03:20:14.980965    7121 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (67.278792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
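
Every restart attempt above fails at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch for the CI host, assuming a launchd-managed socket_vmnet install per the upstream lima-vm/socket_vmnet docs (the gateway address below is an assumed example, not a value from this log):

    # Is the socket present, and is anything serving it?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep socket_vmnet

    # Run the daemon in the foreground for debugging (gateway address is an assumption):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet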

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.1215ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-824000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (30.919458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
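
This failure is downstream of the failed start: a successful "minikube start" writes a functional-824000 context into the kubeconfig, and "kubectl config current-context" only reads it back. Since the VM never came up, no context exists. A hedged spot check with standard kubectl subcommands, using the KUBECONFIG path from this report's environment:

    export KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
    kubectl config get-contexts    # on a healthy run this would list functional-824000
    kubectl config view --flatten  # dump the merged config for inspection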

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-824000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-824000 get po -A: exit status 1 (25.99ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-824000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-824000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-824000\n"*: args "kubectl --context functional-824000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-824000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (31.155125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh sudo crictl images: exit status 83 (42.876833ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-824000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)
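
This check re-reads, from inside the VM, the images that the earlier "cache add" commands (see the audit table further down) loaded into the node. On a running cluster the manual equivalent, reusing the test's own invocation and the sha prefix from the assertion above, would be roughly:

    out/minikube-darwin-arm64 -p functional-824000 ssh sudo crictl images                       # list images inside the node
    out/minikube-darwin-arm64 -p functional-824000 ssh sudo crictl images | grep 3d18732f8686c  # pause:3.3 present?

Here the ssh step instead exits with status 83 alongside the "host is not running" advisory, so nothing inside the node can be verified.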

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (41.738375ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-824000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.026041ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.947125ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-824000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)
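
The intended round trip of this test, annotated for reference (same commands as above; the expected results are inferred from the test's assertions, not observed in this run):

    out/minikube-darwin-arm64 -p functional-824000 ssh sudo docker rmi registry.k8s.io/pause:latest       # remove the image inside the node
    out/minikube-darwin-arm64 -p functional-824000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # expected to fail: image gone
    out/minikube-darwin-arm64 -p functional-824000 cache reload                                           # re-load cached tarballs into the node
    out/minikube-darwin-arm64 -p functional-824000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # expected to succeed again

With the host stopped, every ssh step short-circuits with exit status 83 before the cache logic is exercised.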

TestFunctional/serial/MinikubeKubectlCmd (2.25s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 kubectl -- --context functional-824000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 kubectl -- --context functional-824000 get pods: exit status 1 (2.216904334s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-824000
	* no server found for cluster "functional-824000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-824000 kubectl -- --context functional-824000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (31.829667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (2.25s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.04s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-824000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-824000 get pods: exit status 1 (1.013043875s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-824000
	* no server found for cluster "functional-824000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-824000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (30.312208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.04s)

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-824000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-824000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.194544208s)

-- stdout --
	* [functional-824000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-824000" primary control-plane node in "functional-824000" cluster
	* Restarting existing qemu2 VM for "functional-824000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-824000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-824000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-824000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.1951505s for "functional-824000" cluster.
I0923 03:20:26.873148    7121 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (69.095084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
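
--extra-config takes the form <component>.<flag>=<value>; minikube persists it in the profile, which is why the saved cluster config later in this log shows ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}]. The flag never reached an apiserver here because guest provisioning failed first. The invocation under test, for reference:

    out/minikube-darwin-arm64 start -p functional-824000 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all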

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-824000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-824000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.041125ms)

** stderr ** 
	error: context "functional-824000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-824000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (30.769208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
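
The health check selects control-plane pods by label and inspects the returned JSON for their status. On a working cluster an equivalent spot check would be something like the sketch below (the jsonpath template is an illustrative assumption, not part of the test):

    kubectl --context functional-824000 -n kube-system get po -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'

Here the context does not exist, so kubectl exits with status 1 before any pods are queried.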

TestFunctional/serial/LogsCmd (0.07s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 logs: exit status 83 (73.613833ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-437000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | -p download-only-437000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	| delete  | -p download-only-437000                                                  | download-only-437000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	| start   | -o=json --download-only                                                  | download-only-406000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | -p download-only-406000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	| delete  | -p download-only-406000                                                  | download-only-406000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	| delete  | -p download-only-437000                                                  | download-only-437000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	| delete  | -p download-only-406000                                                  | download-only-406000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	| start   | --download-only -p                                                       | binary-mirror-429000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | binary-mirror-429000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51036                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-429000                                                  | binary-mirror-429000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	| addons  | enable dashboard -p                                                      | addons-858000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | addons-858000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-858000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | addons-858000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-858000 --wait=true                                             | addons-858000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-858000                                                         | addons-858000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	| start   | -p nospam-177000 -n=1 --memory=2250 --wait=false                         | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-177000                                                         | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	| start   | -p functional-824000                                                     | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-824000                                                     | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-824000 cache add                                              | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-824000 cache add                                              | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-824000 cache add                                              | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-824000 cache add                                              | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
	|         | minikube-local-cache-test:functional-824000                              |                      |         |         |                     |                     |
	| cache   | functional-824000 cache delete                                           | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
	|         | minikube-local-cache-test:functional-824000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
	| ssh     | functional-824000 ssh sudo                                               | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-824000                                                        | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-824000 ssh                                                    | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-824000 cache reload                                           | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
	| ssh     | functional-824000 ssh                                                    | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-824000 kubectl --                                             | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
	|         | --context functional-824000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-824000                                                     | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 03:20:21
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 03:20:21.704104    7426 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:20:21.704211    7426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:20:21.704213    7426 out.go:358] Setting ErrFile to fd 2...
	I0923 03:20:21.704215    7426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:20:21.704327    7426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:20:21.705341    7426 out.go:352] Setting JSON to false
	I0923 03:20:21.721151    7426 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4792,"bootTime":1727082029,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:20:21.721210    7426 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:20:21.726628    7426 out.go:177] * [functional-824000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:20:21.734705    7426 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:20:21.734757    7426 notify.go:220] Checking for updates...
	I0923 03:20:21.749668    7426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:20:21.753687    7426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:20:21.756641    7426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:20:21.759622    7426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:20:21.762554    7426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:20:21.765999    7426 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:20:21.766048    7426 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:20:21.770629    7426 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:20:21.777618    7426 start.go:297] selected driver: qemu2
	I0923 03:20:21.777621    7426 start.go:901] validating driver "qemu2" against &{Name:functional-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:20:21.777666    7426 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:20:21.779946    7426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:20:21.779969    7426 cni.go:84] Creating CNI manager for ""
	I0923 03:20:21.779993    7426 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:20:21.780032    7426 start.go:340] cluster config:
	{Name:functional-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:20:21.783608    7426 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:20:21.792613    7426 out.go:177] * Starting "functional-824000" primary control-plane node in "functional-824000" cluster
	I0923 03:20:21.796631    7426 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:20:21.796644    7426 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:20:21.796656    7426 cache.go:56] Caching tarball of preloaded images
	I0923 03:20:21.796722    7426 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:20:21.796726    7426 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:20:21.796775    7426 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/functional-824000/config.json ...
	I0923 03:20:21.797199    7426 start.go:360] acquireMachinesLock for functional-824000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:20:21.797234    7426 start.go:364] duration metric: took 29.666µs to acquireMachinesLock for "functional-824000"
	I0923 03:20:21.797246    7426 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:20:21.797248    7426 fix.go:54] fixHost starting: 
	I0923 03:20:21.797366    7426 fix.go:112] recreateIfNeeded on functional-824000: state=Stopped err=<nil>
	W0923 03:20:21.797373    7426 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:20:21.804622    7426 out.go:177] * Restarting existing qemu2 VM for "functional-824000" ...
	I0923 03:20:21.808520    7426 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:20:21.808560    7426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:5d:a0:a3:a6:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/disk.qcow2
	I0923 03:20:21.810547    7426 main.go:141] libmachine: STDOUT: 
	I0923 03:20:21.810562    7426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:20:21.810594    7426 fix.go:56] duration metric: took 13.34325ms for fixHost
	I0923 03:20:21.810598    7426 start.go:83] releasing machines lock for "functional-824000", held for 13.36175ms
	W0923 03:20:21.810604    7426 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:20:21.810638    7426 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:20:21.810643    7426 start.go:729] Will try again in 5 seconds ...
	I0923 03:20:26.812751    7426 start.go:360] acquireMachinesLock for functional-824000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:20:26.813190    7426 start.go:364] duration metric: took 357.875µs to acquireMachinesLock for "functional-824000"
	I0923 03:20:26.813332    7426 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:20:26.813347    7426 fix.go:54] fixHost starting: 
	I0923 03:20:26.814106    7426 fix.go:112] recreateIfNeeded on functional-824000: state=Stopped err=<nil>
	W0923 03:20:26.814127    7426 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:20:26.822532    7426 out.go:177] * Restarting existing qemu2 VM for "functional-824000" ...
	I0923 03:20:26.825661    7426 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:20:26.826009    7426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:5d:a0:a3:a6:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/disk.qcow2
	I0923 03:20:26.835678    7426 main.go:141] libmachine: STDOUT: 
	I0923 03:20:26.835724    7426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:20:26.835808    7426 fix.go:56] duration metric: took 22.463459ms for fixHost
	I0923 03:20:26.835827    7426 start.go:83] releasing machines lock for "functional-824000", held for 22.603333ms
	W0923 03:20:26.836004    7426 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-824000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:20:26.843663    7426 out.go:201] 
	W0923 03:20:26.847686    7426 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:20:26.847711    7426 out.go:270] * 
	W0923 03:20:26.850311    7426 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:20:26.858531    7426 out.go:201] 
	
	
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
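Both restart attempts in the dump above die at the same step: minikube launches QEMU through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, so the guest never boots and every later check runs against a stopped host. A minimal diagnostic sketch (hypothetical, not part of the test suite; it assumes the socket path shown in the log above) that checks whether the socket_vmnet daemon is listening:

// probe_socket_vmnet.go -- hypothetical standalone diagnostic; dials the
// unix socket that the QEMU launch above depends on. A "connection refused"
// error here reproduces the driver-start failure in the log.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the log above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}

If this probe reports "connection refused", the socket_vmnet service on the build agent is the likely culprit and is worth restarting before re-running the suite.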
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-824000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-437000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | -p download-only-437000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| delete  | -p download-only-437000                                                  | download-only-437000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| start   | -o=json --download-only                                                  | download-only-406000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | -p download-only-406000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| delete  | -p download-only-406000                                                  | download-only-406000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| delete  | -p download-only-437000                                                  | download-only-437000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| delete  | -p download-only-406000                                                  | download-only-406000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| start   | --download-only -p                                                       | binary-mirror-429000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | binary-mirror-429000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51036                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-429000                                                  | binary-mirror-429000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| addons  | enable dashboard -p                                                      | addons-858000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | addons-858000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-858000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | addons-858000                                                            |                      |         |         |                     |                     |
| start   | -p addons-858000 --wait=true                                             | addons-858000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-858000                                                         | addons-858000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| start   | -p nospam-177000 -n=1 --memory=2250 --wait=false                         | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-177000                                                         | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| start   | -p functional-824000                                                     | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-824000                                                     | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-824000 cache add                                              | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-824000 cache add                                              | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-824000 cache add                                              | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-824000 cache add                                              | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | minikube-local-cache-test:functional-824000                              |                      |         |         |                     |                     |
| cache   | functional-824000 cache delete                                           | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | minikube-local-cache-test:functional-824000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
| ssh     | functional-824000 ssh sudo                                               | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-824000                                                        | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-824000 ssh                                                    | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-824000 cache reload                                           | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
| ssh     | functional-824000 ssh                                                    | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-824000 kubectl --                                             | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | --context functional-824000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-824000                                                     | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/09/23 03:20:21
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0923 03:20:21.704104    7426 out.go:345] Setting OutFile to fd 1 ...
I0923 03:20:21.704211    7426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:20:21.704213    7426 out.go:358] Setting ErrFile to fd 2...
I0923 03:20:21.704215    7426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:20:21.704327    7426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
I0923 03:20:21.705341    7426 out.go:352] Setting JSON to false
I0923 03:20:21.721151    7426 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4792,"bootTime":1727082029,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0923 03:20:21.721210    7426 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0923 03:20:21.726628    7426 out.go:177] * [functional-824000] minikube v1.34.0 on Darwin 14.5 (arm64)
I0923 03:20:21.734705    7426 out.go:177]   - MINIKUBE_LOCATION=19689
I0923 03:20:21.734757    7426 notify.go:220] Checking for updates...
I0923 03:20:21.749668    7426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
I0923 03:20:21.753687    7426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0923 03:20:21.756641    7426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0923 03:20:21.759622    7426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
I0923 03:20:21.762554    7426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0923 03:20:21.765999    7426 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 03:20:21.766048    7426 driver.go:394] Setting default libvirt URI to qemu:///system
I0923 03:20:21.770629    7426 out.go:177] * Using the qemu2 driver based on existing profile
I0923 03:20:21.777618    7426 start.go:297] selected driver: qemu2
I0923 03:20:21.777621    7426 start.go:901] validating driver "qemu2" against &{Name:functional-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 03:20:21.777666    7426 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0923 03:20:21.779946    7426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 03:20:21.779969    7426 cni.go:84] Creating CNI manager for ""
I0923 03:20:21.779993    7426 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 03:20:21.780032    7426 start.go:340] cluster config:
{Name:functional-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 03:20:21.783608    7426 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 03:20:21.792613    7426 out.go:177] * Starting "functional-824000" primary control-plane node in "functional-824000" cluster
I0923 03:20:21.796631    7426 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 03:20:21.796644    7426 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0923 03:20:21.796656    7426 cache.go:56] Caching tarball of preloaded images
I0923 03:20:21.796722    7426 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0923 03:20:21.796726    7426 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0923 03:20:21.796775    7426 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/functional-824000/config.json ...
I0923 03:20:21.797199    7426 start.go:360] acquireMachinesLock for functional-824000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0923 03:20:21.797234    7426 start.go:364] duration metric: took 29.666µs to acquireMachinesLock for "functional-824000"
I0923 03:20:21.797246    7426 start.go:96] Skipping create...Using existing machine configuration
I0923 03:20:21.797248    7426 fix.go:54] fixHost starting: 
I0923 03:20:21.797366    7426 fix.go:112] recreateIfNeeded on functional-824000: state=Stopped err=<nil>
W0923 03:20:21.797373    7426 fix.go:138] unexpected machine state, will restart: <nil>
I0923 03:20:21.804622    7426 out.go:177] * Restarting existing qemu2 VM for "functional-824000" ...
I0923 03:20:21.808520    7426 qemu.go:418] Using hvf for hardware acceleration
I0923 03:20:21.808560    7426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:5d:a0:a3:a6:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/disk.qcow2
I0923 03:20:21.810547    7426 main.go:141] libmachine: STDOUT: 
I0923 03:20:21.810562    7426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0923 03:20:21.810594    7426 fix.go:56] duration metric: took 13.34325ms for fixHost
I0923 03:20:21.810598    7426 start.go:83] releasing machines lock for "functional-824000", held for 13.36175ms
W0923 03:20:21.810604    7426 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0923 03:20:21.810638    7426 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0923 03:20:21.810643    7426 start.go:729] Will try again in 5 seconds ...
I0923 03:20:26.812751    7426 start.go:360] acquireMachinesLock for functional-824000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0923 03:20:26.813190    7426 start.go:364] duration metric: took 357.875µs to acquireMachinesLock for "functional-824000"
I0923 03:20:26.813332    7426 start.go:96] Skipping create...Using existing machine configuration
I0923 03:20:26.813347    7426 fix.go:54] fixHost starting: 
I0923 03:20:26.814106    7426 fix.go:112] recreateIfNeeded on functional-824000: state=Stopped err=<nil>
W0923 03:20:26.814127    7426 fix.go:138] unexpected machine state, will restart: <nil>
I0923 03:20:26.822532    7426 out.go:177] * Restarting existing qemu2 VM for "functional-824000" ...
I0923 03:20:26.825661    7426 qemu.go:418] Using hvf for hardware acceleration
I0923 03:20:26.826009    7426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:5d:a0:a3:a6:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/disk.qcow2
I0923 03:20:26.835678    7426 main.go:141] libmachine: STDOUT: 
I0923 03:20:26.835724    7426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0923 03:20:26.835808    7426 fix.go:56] duration metric: took 22.463459ms for fixHost
I0923 03:20:26.835827    7426 start.go:83] releasing machines lock for "functional-824000", held for 22.603333ms
W0923 03:20:26.836004    7426 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-824000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0923 03:20:26.843663    7426 out.go:201] 
W0923 03:20:26.847686    7426 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0923 03:20:26.847711    7426 out.go:270] * 
W0923 03:20:26.850311    7426 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0923 03:20:26.858531    7426 out.go:201] 

* The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
***
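The verdict below follows from a simple substring assertion: per the expectation quoted above, functional_test.go:1228 requires the word "Linux" (which a running guest would emit in its log output) somewhere in the `minikube logs` output, and a stopped host can only print the state=Stopped hint. A rough, hypothetical reduction of that check (the real helper lives in minikube's test harness; binary path and profile name are taken from the audit table above):

// Hypothetical reduction of the failing assertion: run `minikube logs`
// for the profile and require the word "Linux" in its combined output.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-824000", "logs").CombinedOutput()
	if err != nil {
		// in this run the command itself failed with exit status 83
		fmt.Fprintf(os.Stderr, "minikube logs failed: %v\n", err)
	}
	if !strings.Contains(string(out), "Linux") {
		fmt.Fprintln(os.Stderr, `expected minikube logs to include word "Linux"`)
		os.Exit(1)
	}
	fmt.Println("logs contain the expected guest information")
}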
--- FAIL: TestFunctional/serial/LogsCmd (0.07s)

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3800512702/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-437000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | -p download-only-437000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| delete  | -p download-only-437000                                                  | download-only-437000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| start   | -o=json --download-only                                                  | download-only-406000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | -p download-only-406000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| delete  | -p download-only-406000                                                  | download-only-406000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| delete  | -p download-only-437000                                                  | download-only-437000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| delete  | -p download-only-406000                                                  | download-only-406000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| start   | --download-only -p                                                       | binary-mirror-429000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | binary-mirror-429000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51036                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-429000                                                  | binary-mirror-429000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| addons  | enable dashboard -p                                                      | addons-858000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | addons-858000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-858000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | addons-858000                                                            |                      |         |         |                     |                     |
| start   | -p addons-858000 --wait=true                                             | addons-858000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-858000                                                         | addons-858000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| start   | -p nospam-177000 -n=1 --memory=2250 --wait=false                         | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-177000 --log_dir                                                  | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-177000                                                         | nospam-177000        | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
| start   | -p functional-824000                                                     | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-824000                                                     | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-824000 cache add                                              | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-824000 cache add                                              | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-824000 cache add                                              | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-824000 cache add                                              | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | minikube-local-cache-test:functional-824000                              |                      |         |         |                     |                     |
| cache   | functional-824000 cache delete                                           | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | minikube-local-cache-test:functional-824000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
| ssh     | functional-824000 ssh sudo                                               | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-824000                                                        | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-824000 ssh                                                    | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-824000 cache reload                                           | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
| ssh     | functional-824000 ssh                                                    | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT | 23 Sep 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-824000 kubectl --                                             | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | --context functional-824000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-824000                                                     | functional-824000    | jenkins | v1.34.0 | 23 Sep 24 03:20 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/09/23 03:20:21
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0923 03:20:21.704104    7426 out.go:345] Setting OutFile to fd 1 ...
I0923 03:20:21.704211    7426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:20:21.704213    7426 out.go:358] Setting ErrFile to fd 2...
I0923 03:20:21.704215    7426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:20:21.704327    7426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
I0923 03:20:21.705341    7426 out.go:352] Setting JSON to false
I0923 03:20:21.721151    7426 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4792,"bootTime":1727082029,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0923 03:20:21.721210    7426 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0923 03:20:21.726628    7426 out.go:177] * [functional-824000] minikube v1.34.0 on Darwin 14.5 (arm64)
I0923 03:20:21.734705    7426 out.go:177]   - MINIKUBE_LOCATION=19689
I0923 03:20:21.734757    7426 notify.go:220] Checking for updates...
I0923 03:20:21.749668    7426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
I0923 03:20:21.753687    7426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0923 03:20:21.756641    7426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0923 03:20:21.759622    7426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
I0923 03:20:21.762554    7426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0923 03:20:21.765999    7426 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 03:20:21.766048    7426 driver.go:394] Setting default libvirt URI to qemu:///system
I0923 03:20:21.770629    7426 out.go:177] * Using the qemu2 driver based on existing profile
I0923 03:20:21.777618    7426 start.go:297] selected driver: qemu2
I0923 03:20:21.777621    7426 start.go:901] validating driver "qemu2" against &{Name:functional-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 03:20:21.777666    7426 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0923 03:20:21.779946    7426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 03:20:21.779969    7426 cni.go:84] Creating CNI manager for ""
I0923 03:20:21.779993    7426 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 03:20:21.780032    7426 start.go:340] cluster config:
{Name:functional-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 03:20:21.783608    7426 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 03:20:21.792613    7426 out.go:177] * Starting "functional-824000" primary control-plane node in "functional-824000" cluster
I0923 03:20:21.796631    7426 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 03:20:21.796644    7426 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0923 03:20:21.796656    7426 cache.go:56] Caching tarball of preloaded images
I0923 03:20:21.796722    7426 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0923 03:20:21.796726    7426 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0923 03:20:21.796775    7426 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/functional-824000/config.json ...
I0923 03:20:21.797199    7426 start.go:360] acquireMachinesLock for functional-824000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0923 03:20:21.797234    7426 start.go:364] duration metric: took 29.666µs to acquireMachinesLock for "functional-824000"
I0923 03:20:21.797246    7426 start.go:96] Skipping create...Using existing machine configuration
I0923 03:20:21.797248    7426 fix.go:54] fixHost starting: 
I0923 03:20:21.797366    7426 fix.go:112] recreateIfNeeded on functional-824000: state=Stopped err=<nil>
W0923 03:20:21.797373    7426 fix.go:138] unexpected machine state, will restart: <nil>
I0923 03:20:21.804622    7426 out.go:177] * Restarting existing qemu2 VM for "functional-824000" ...
I0923 03:20:21.808520    7426 qemu.go:418] Using hvf for hardware acceleration
I0923 03:20:21.808560    7426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:5d:a0:a3:a6:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/disk.qcow2
I0923 03:20:21.810547    7426 main.go:141] libmachine: STDOUT: 
I0923 03:20:21.810562    7426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0923 03:20:21.810594    7426 fix.go:56] duration metric: took 13.34325ms for fixHost
I0923 03:20:21.810598    7426 start.go:83] releasing machines lock for "functional-824000", held for 13.36175ms
W0923 03:20:21.810604    7426 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0923 03:20:21.810638    7426 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0923 03:20:21.810643    7426 start.go:729] Will try again in 5 seconds ...
I0923 03:20:26.812751    7426 start.go:360] acquireMachinesLock for functional-824000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0923 03:20:26.813190    7426 start.go:364] duration metric: took 357.875µs to acquireMachinesLock for "functional-824000"
I0923 03:20:26.813332    7426 start.go:96] Skipping create...Using existing machine configuration
I0923 03:20:26.813347    7426 fix.go:54] fixHost starting: 
I0923 03:20:26.814106    7426 fix.go:112] recreateIfNeeded on functional-824000: state=Stopped err=<nil>
W0923 03:20:26.814127    7426 fix.go:138] unexpected machine state, will restart: <nil>
I0923 03:20:26.822532    7426 out.go:177] * Restarting existing qemu2 VM for "functional-824000" ...
I0923 03:20:26.825661    7426 qemu.go:418] Using hvf for hardware acceleration
I0923 03:20:26.826009    7426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:5d:a0:a3:a6:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/functional-824000/disk.qcow2
I0923 03:20:26.835678    7426 main.go:141] libmachine: STDOUT: 
I0923 03:20:26.835724    7426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0923 03:20:26.835808    7426 fix.go:56] duration metric: took 22.463459ms for fixHost
I0923 03:20:26.835827    7426 start.go:83] releasing machines lock for "functional-824000", held for 22.603333ms
W0923 03:20:26.836004    7426 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-824000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0923 03:20:26.843663    7426 out.go:201] 
W0923 03:20:26.847686    7426 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0923 03:20:26.847711    7426 out.go:270] * 
W0923 03:20:26.850311    7426 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0923 03:20:26.858531    7426 out.go:201] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
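
Every start attempt in the log above dies at the same point: the qemu2 driver cannot reach the host-side socket_vmnet socket ("Failed to connect to /var/run/socket_vmnet: Connection refused"), so the VM never boots and every later test inherits a stopped profile. A minimal host-side triage sketch, assuming socket_vmnet was installed via Homebrew as in the minikube qemu2 driver docs (the service name and restart step are assumptions, not taken from this report):

    # Is anything serving the socket the driver dials?
    ls -l /var/run/socket_vmnet
    sudo lsof -U 2>/dev/null | grep socket_vmnet
    # Assumed Homebrew-managed install; restart the service if it is down
    sudo brew services restart socket_vmnet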

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-824000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-824000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.229834ms)

** stderr ** 
	error: context "functional-824000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-824000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
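
The error context "functional-824000" does not exist is a knock-on effect rather than a separate bug: minikube only writes a kubeconfig context once the cluster comes up, and the start above never succeeded. Standard kubectl commands to confirm which contexts actually exist:

    kubectl config get-contexts       # lists every context in the active kubeconfig
    kubectl config current-context    # errors if no context is selected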

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-824000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-824000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-824000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-824000 --alsologtostderr -v=1] stderr:
I0923 03:21:13.521544    7734 out.go:345] Setting OutFile to fd 1 ...
I0923 03:21:13.521957    7734 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:21:13.521961    7734 out.go:358] Setting ErrFile to fd 2...
I0923 03:21:13.521964    7734 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:21:13.522143    7734 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
I0923 03:21:13.522361    7734 mustload.go:65] Loading cluster: functional-824000
I0923 03:21:13.522597    7734 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 03:21:13.526146    7734 out.go:177] * The control-plane node functional-824000 host is not running: state=Stopped
I0923 03:21:13.530255    7734 out.go:177]   To start a cluster, run: "minikube start -p functional-824000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (43.3245ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 status: exit status 7 (29.990833ms)

-- stdout --
	functional-824000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-824000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (30.41175ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-824000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 status -o json: exit status 7 (29.947541ms)

-- stdout --
	{"Name":"functional-824000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-824000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (29.599708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
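
The repeated "exit status 7 (may be ok)" is expected for a stopped profile: per `minikube status --help`, the exit status encodes VM, cluster, and Kubernetes health as bits, so 7 = 1 (minikube NOK) + 2 (cluster NOK) + 4 (Kubernetes NOK). A sketch of reading it directly, using the binary and profile from this run:

    out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
    echo $?    # 7 here: all three components reported not OK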

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-824000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-824000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.511541ms)

** stderr ** 
	error: context "functional-824000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-824000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-824000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-824000 describe po hello-node-connect: exit status 1 (25.93ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-824000

** /stderr **
functional_test.go:1604: "kubectl --context functional-824000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-824000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-824000 logs -l app=hello-node-connect: exit status 1 (25.892458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-824000

** /stderr **
functional_test.go:1610: "kubectl --context functional-824000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-824000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-824000 describe svc hello-node-connect: exit status 1 (26.035917ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-824000

** /stderr **
functional_test.go:1616: "kubectl --context functional-824000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (30.080833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-824000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (30.816333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "echo hello": exit status 83 (44.578583ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-824000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-824000\"\n"*. args "out/minikube-darwin-arm64 -p functional-824000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "cat /etc/hostname": exit status 83 (42.869625ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-824000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-824000"- but got *"* The control-plane node functional-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-824000\"\n"*. args "out/minikube-darwin-arm64 -p functional-824000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (30.652ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)
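
Note the two exit codes this report keeps alternating between: read-only status queries against the stopped profile exit 7, while action commands that need a running guest (ssh, cp, service, ...) exit 83 and print the "To start a cluster" advice instead of their normal output. Reproduced with commands taken verbatim from this run:

    out/minikube-darwin-arm64 -p functional-824000 ssh "echo hello"; echo "ssh: $?"      # 83 in this run
    out/minikube-darwin-arm64 -p functional-824000 status >/dev/null; echo "status: $?"  # 7 in this run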

TestFunctional/parallel/CpCmd (0.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (55.46675ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-824000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh -n functional-824000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh -n functional-824000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.045667ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-824000 ssh -n functional-824000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-824000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-824000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 cp functional-824000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd4270216668/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 cp functional-824000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd4270216668/001/cp-test.txt: exit status 83 (38.515209ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-824000 cp functional-824000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd4270216668/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh -n functional-824000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh -n functional-824000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.015416ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-824000 ssh -n functional-824000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd4270216668/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-824000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (41.885958ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-824000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh -n functional-824000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh -n functional-824000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (45.6825ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-824000 ssh -n functional-824000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-824000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-824000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.27s)
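
The (-want +got) diffs above follow go-cmp conventions: "-" lines are the fixture content the test expected to round-trip, and "+" lines are what actually came back, here the stopped-profile advice text in place of the copied file. With a running guest, the same pair of commands from this run would round-trip the file; a sketch of the intended flow:

    # testdata/cp-test.txt contains "Test file for checking file cp process" (per the diff above)
    out/minikube-darwin-arm64 -p functional-824000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p functional-824000 ssh -n functional-824000 "sudo cat /home/docker/cp-test.txt"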

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7121/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /etc/test/nested/copy/7121/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /etc/test/nested/copy/7121/hosts": exit status 83 (39.884291ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /etc/test/nested/copy/7121/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-824000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-824000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (30.195ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.3s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7121.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /etc/ssl/certs/7121.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /etc/ssl/certs/7121.pem": exit status 83 (42.789209ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/7121.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-824000 ssh \"sudo cat /etc/ssl/certs/7121.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7121.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-824000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-824000"
  	"""
  )
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7121.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /usr/share/ca-certificates/7121.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /usr/share/ca-certificates/7121.pem": exit status 83 (45.766458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/7121.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-824000 ssh \"sudo cat /usr/share/ca-certificates/7121.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7121.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-824000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-824000"
  	"""
  )
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (53.528375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-824000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-824000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-824000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/71212.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /etc/ssl/certs/71212.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /etc/ssl/certs/71212.pem": exit status 83 (40.839083ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/71212.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-824000 ssh \"sudo cat /etc/ssl/certs/71212.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/71212.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-824000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-824000"
  	"""
  )
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/71212.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /usr/share/ca-certificates/71212.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /usr/share/ca-certificates/71212.pem": exit status 83 (39.705833ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/71212.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-824000 ssh \"sudo cat /usr/share/ca-certificates/71212.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/71212.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-824000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-824000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (40.753792ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-824000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-824000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-824000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (35.510208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.30s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-824000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-824000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.756833ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-824000

                                                
                                                
** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-824000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-824000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-824000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-824000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-824000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-824000

                                                
                                                
** /stderr **
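
Every label check fails for the same reason: kubectl has no functional-824000 context, so the go-template the test supplies never runs. As a rough illustration (stand-in data, not the real node object), the template itself just walks the node's labels map and prints each key:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// The same template string the test passes to kubectl.
		const tpl = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
		// A stand-in for the "get nodes" list response.
		data := map[string]any{
			"items": []map[string]any{
				{"metadata": map[string]any{"labels": map[string]string{
					"minikube.k8s.io/name": "functional-824000",
				}}},
			},
		}
		t := template.Must(template.New("labels").Parse(tpl))
		_ = t.Execute(os.Stdout, data) // prints: minikube.k8s.io/name
	}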
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-824000 -n functional-824000: exit status 7 (32.029792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-824000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "sudo systemctl is-active crio": exit status 83 (45.551166ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-824000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 version -o=json --components: exit status 83 (40.80725ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-824000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-824000 image ls --format short --alsologtostderr:
I0923 03:21:13.928086    7749 out.go:345] Setting OutFile to fd 1 ...
I0923 03:21:13.928232    7749 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:21:13.928236    7749 out.go:358] Setting ErrFile to fd 2...
I0923 03:21:13.928238    7749 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:21:13.928396    7749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
I0923 03:21:13.928848    7749 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 03:21:13.928910    7749 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-824000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-824000 image ls --format table --alsologtostderr:
I0923 03:21:14.036594    7755 out.go:345] Setting OutFile to fd 1 ...
I0923 03:21:14.036775    7755 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:21:14.036778    7755 out.go:358] Setting ErrFile to fd 2...
I0923 03:21:14.036781    7755 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:21:14.037119    7755 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
I0923 03:21:14.037851    7755 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 03:21:14.037918    7755 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-824000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-824000 image ls --format json --alsologtostderr:
I0923 03:21:14.000800    7753 out.go:345] Setting OutFile to fd 1 ...
I0923 03:21:14.000969    7753 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:21:14.000973    7753 out.go:358] Setting ErrFile to fd 2...
I0923 03:21:14.000975    7753 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:21:14.001134    7753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
I0923 03:21:14.001552    7753 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 03:21:14.001618    7753 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-824000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-824000 image ls --format yaml --alsologtostderr:
I0923 03:21:13.965220    7751 out.go:345] Setting OutFile to fd 1 ...
I0923 03:21:13.965365    7751 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:21:13.965369    7751 out.go:358] Setting ErrFile to fd 2...
I0923 03:21:13.965371    7751 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:21:13.965506    7751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
I0923 03:21:13.965957    7751 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 03:21:13.966020    7751 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh pgrep buildkitd: exit status 83 (41.728542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image build -t localhost/my-image:functional-824000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-824000 image build -t localhost/my-image:functional-824000 testdata/build --alsologtostderr:
I0923 03:21:14.115976    7759 out.go:345] Setting OutFile to fd 1 ...
I0923 03:21:14.116401    7759 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:21:14.116405    7759 out.go:358] Setting ErrFile to fd 2...
I0923 03:21:14.116409    7759 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:21:14.116564    7759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
I0923 03:21:14.116987    7759 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 03:21:14.117454    7759 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 03:21:14.117684    7759 build_images.go:133] succeeded building to: 
I0923 03:21:14.117688    7759 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image ls
functional_test.go:446: expected "localhost/my-image:functional-824000" to be loaded into minikube but the image is not there
I0923 03:21:38.485948    7121 retry.go:31] will retry after 35.777475527s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-824000 docker-env) && out/minikube-darwin-arm64 status -p functional-824000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-824000 docker-env) && out/minikube-darwin-arm64 status -p functional-824000": exit status 1 (45.798709ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 update-context --alsologtostderr -v=2: exit status 83 (42.766625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:21:13.798935    7743 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:21:13.799541    7743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:21:13.799545    7743 out.go:358] Setting ErrFile to fd 2...
	I0923 03:21:13.799547    7743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:21:13.799718    7743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:21:13.799930    7743 mustload.go:65] Loading cluster: functional-824000
	I0923 03:21:13.800122    7743 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:21:13.804220    7743 out.go:177] * The control-plane node functional-824000 host is not running: state=Stopped
	I0923 03:21:13.808202    7743 out.go:177]   To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-824000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-824000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 update-context --alsologtostderr -v=2: exit status 83 (42.255084ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:21:13.885285    7747 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:21:13.885433    7747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:21:13.885436    7747 out.go:358] Setting ErrFile to fd 2...
	I0923 03:21:13.885439    7747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:21:13.885573    7747 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:21:13.885825    7747 mustload.go:65] Loading cluster: functional-824000
	I0923 03:21:13.886036    7747 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:21:13.890204    7747 out.go:177] * The control-plane node functional-824000 host is not running: state=Stopped
	I0923 03:21:13.894211    7747 out.go:177]   To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-824000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-824000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 update-context --alsologtostderr -v=2: exit status 83 (42.811916ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:21:13.842298    7745 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:21:13.842428    7745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:21:13.842432    7745 out.go:358] Setting ErrFile to fd 2...
	I0923 03:21:13.842435    7745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:21:13.842579    7745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:21:13.842833    7745 mustload.go:65] Loading cluster: functional-824000
	I0923 03:21:13.843053    7745 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:21:13.847239    7745 out.go:177] * The control-plane node functional-824000 host is not running: state=Stopped
	I0923 03:21:13.851124    7745 out.go:177]   To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-824000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-824000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-824000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-824000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.060583ms)

                                                
                                                
** stderr ** 
	error: context "functional-824000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-824000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 service list: exit status 83 (43.846083ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-824000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-824000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 service list -o json: exit status 83 (42.7215ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-824000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 service --namespace=default --https --url hello-node: exit status 83 (45.884583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-824000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 service hello-node --url --format={{.IP}}: exit status 83 (43.840292ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-824000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-824000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 service hello-node --url: exit status 83 (42.842125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-824000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
functional_test.go:1569: failed to parse "* The control-plane node functional-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-824000\"": parse "* The control-plane node functional-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-824000\"": net/url: invalid control character in URL
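
The parse failure here is Go's net/url rejecting the multi-line hint text that came back in place of a URL; a minimal sketch of the same error (the string below is copied from the log, not a real endpoint):

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// The service command printed minikube's hint instead of a URL; the
		// embedded newline is a control character, which url.Parse rejects.
		hint := "* The control-plane node functional-824000 host is not running: state=Stopped\n" +
			"  To start a cluster, run: \"minikube start -p functional-824000\""
		_, err := url.Parse(hint)
		fmt.Println(err) // net/url: invalid control character in URL
	}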
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-824000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-824000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0923 03:20:28.655600    7544 out.go:345] Setting OutFile to fd 1 ...
I0923 03:20:28.655781    7544 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:20:28.655784    7544 out.go:358] Setting ErrFile to fd 2...
I0923 03:20:28.655787    7544 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:20:28.655928    7544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
I0923 03:20:28.656162    7544 mustload.go:65] Loading cluster: functional-824000
I0923 03:20:28.656386    7544 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 03:20:28.662014    7544 out.go:177] * The control-plane node functional-824000 host is not running: state=Stopped
I0923 03:20:28.672930    7544 out.go:177]   To start a cluster, run: "minikube start -p functional-824000"

                                                
                                                
stdout: * The control-plane node functional-824000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-824000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-824000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-824000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-824000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-824000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7545: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-824000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-824000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-824000": client config: context "functional-824000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0923 03:20:28.726334    7121 retry.go:31] will retry after 3.598806194s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-824000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-824000 get svc nginx-svc: exit status 1 (70.24625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-824000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-824000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (105.62s)
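
The 105 seconds above were spent retrying `Get "http:"`: with no tunnel running, nginx-svc never gets an ingress IP, so the probe URL degenerates to a scheme with an empty host, which Go's HTTP client refuses to send. A minimal reproduction of that exact error:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// "http://" with nothing after it is what the probe URL becomes
	// when the service has no external IP.
	_, err := http.Get("http://")
	fmt.Println(err) // Get "http:": http: no Host in request URL
}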

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image load --daemon kicbase/echo-server:functional-824000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-824000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image load --daemon kicbase/echo-server:functional-824000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-824000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-824000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image load --daemon kicbase/echo-server:functional-824000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-824000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image save kicbase/echo-server:functional-824000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)
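
This failure cascades into ImageLoadFromFile below: with the host stopped, `image save` writes nothing, so the tarball the next test tries to load never exists. A sketch of the existence check that trips here (roughly what functional_test.go:386 verifies; the path is copied from the log):

package main

import (
	"fmt"
	"os"
)

func main() {
	// With the cluster stopped, `image save` never creates this file,
	// so the stat fails and both tests report the same root cause.
	const tar = "/Users/jenkins/workspace/echo-server-save.tar"
	if _, err := os.Stat(tar); err != nil {
		fmt.Printf("expected %q to exist after `image save`: %v\n", tar, err)
	}
}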

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-824000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0923 03:22:14.350039    7121 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.02543325s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
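
The scutil dump explains what the test expected: resolver #8 scopes cluster.local to 10.96.0.10, the cluster DNS address that a running `minikube tunnel` would make reachable. With no tunnel, nothing on the host answers that address, so dig times out. A rough Go equivalent of the dig probe (5s dial timeout per attempt, against the nameserver and record from the log):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Query the in-cluster DNS service directly, as dig does above.
	// Without a tunnel, 10.96.0.10 is unreachable from the host and
	// the lookup times out instead of returning an A record.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	fmt.Println(addrs, err)
}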

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0923 03:22:39.473464    7121 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 03:22:49.475537    7121 retry.go:31] will retry after 2.666942678s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0923 03:23:02.146779    7121 retry.go:31] will retry after 5.49499069s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:60873->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-301000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-301000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.821747417s)

-- stdout --
	* [ha-301000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-301000" primary control-plane node in "ha-301000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-301000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:23:09.865989    7799 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:23:09.866104    7799 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:23:09.866108    7799 out.go:358] Setting ErrFile to fd 2...
	I0923 03:23:09.866110    7799 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:23:09.866271    7799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:23:09.867346    7799 out.go:352] Setting JSON to false
	I0923 03:23:09.883521    7799 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4960,"bootTime":1727082029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:23:09.883596    7799 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:23:09.888374    7799 out.go:177] * [ha-301000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:23:09.892341    7799 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:23:09.892364    7799 notify.go:220] Checking for updates...
	I0923 03:23:09.898337    7799 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:23:09.901344    7799 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:23:09.904414    7799 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:23:09.908343    7799 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:23:09.911343    7799 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:23:09.914463    7799 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:23:09.918231    7799 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:23:09.925370    7799 start.go:297] selected driver: qemu2
	I0923 03:23:09.925377    7799 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:23:09.925385    7799 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:23:09.927699    7799 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:23:09.931305    7799 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:23:09.934424    7799 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:23:09.934440    7799 cni.go:84] Creating CNI manager for ""
	I0923 03:23:09.934463    7799 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 03:23:09.934468    7799 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 03:23:09.934498    7799 start.go:340] cluster config:
	{Name:ha-301000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:23:09.938444    7799 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:23:09.945324    7799 out.go:177] * Starting "ha-301000" primary control-plane node in "ha-301000" cluster
	I0923 03:23:09.949295    7799 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:23:09.949320    7799 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:23:09.949330    7799 cache.go:56] Caching tarball of preloaded images
	I0923 03:23:09.949388    7799 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:23:09.949393    7799 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:23:09.949592    7799 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/ha-301000/config.json ...
	I0923 03:23:09.949604    7799 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/ha-301000/config.json: {Name:mk1446e9944d87d0258163d207b021419f6c2303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:23:09.949809    7799 start.go:360] acquireMachinesLock for ha-301000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:23:09.949846    7799 start.go:364] duration metric: took 31.25µs to acquireMachinesLock for "ha-301000"
	I0923 03:23:09.949859    7799 start.go:93] Provisioning new machine with config: &{Name:ha-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:23:09.949888    7799 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:23:09.958326    7799 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:23:09.975995    7799 start.go:159] libmachine.API.Create for "ha-301000" (driver="qemu2")
	I0923 03:23:09.976022    7799 client.go:168] LocalClient.Create starting
	I0923 03:23:09.976089    7799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:23:09.976122    7799 main.go:141] libmachine: Decoding PEM data...
	I0923 03:23:09.976132    7799 main.go:141] libmachine: Parsing certificate...
	I0923 03:23:09.976167    7799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:23:09.976192    7799 main.go:141] libmachine: Decoding PEM data...
	I0923 03:23:09.976201    7799 main.go:141] libmachine: Parsing certificate...
	I0923 03:23:09.976566    7799 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:23:10.153469    7799 main.go:141] libmachine: Creating SSH key...
	I0923 03:23:10.183546    7799 main.go:141] libmachine: Creating Disk image...
	I0923 03:23:10.183552    7799 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:23:10.183758    7799 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2
	I0923 03:23:10.192813    7799 main.go:141] libmachine: STDOUT: 
	I0923 03:23:10.192828    7799 main.go:141] libmachine: STDERR: 
	I0923 03:23:10.192889    7799 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2 +20000M
	I0923 03:23:10.200734    7799 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:23:10.200750    7799 main.go:141] libmachine: STDERR: 
	I0923 03:23:10.200780    7799 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2
	I0923 03:23:10.200785    7799 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:23:10.200797    7799 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:23:10.200824    7799 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:19:67:ce:ae:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2
	I0923 03:23:10.202459    7799 main.go:141] libmachine: STDOUT: 
	I0923 03:23:10.202472    7799 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:23:10.202490    7799 client.go:171] duration metric: took 226.467208ms to LocalClient.Create
	I0923 03:23:12.204630    7799 start.go:128] duration metric: took 2.254771042s to createHost
	I0923 03:23:12.204691    7799 start.go:83] releasing machines lock for "ha-301000", held for 2.25487575s
	W0923 03:23:12.204828    7799 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:23:12.218958    7799 out.go:177] * Deleting "ha-301000" in qemu2 ...
	W0923 03:23:12.253426    7799 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:23:12.253454    7799 start.go:729] Will try again in 5 seconds ...
	I0923 03:23:17.255547    7799 start.go:360] acquireMachinesLock for ha-301000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:23:17.255925    7799 start.go:364] duration metric: took 304µs to acquireMachinesLock for "ha-301000"
	I0923 03:23:17.256047    7799 start.go:93] Provisioning new machine with config: &{Name:ha-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:23:17.256356    7799 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:23:17.263007    7799 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:23:17.312722    7799 start.go:159] libmachine.API.Create for "ha-301000" (driver="qemu2")
	I0923 03:23:17.312767    7799 client.go:168] LocalClient.Create starting
	I0923 03:23:17.312886    7799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:23:17.312943    7799 main.go:141] libmachine: Decoding PEM data...
	I0923 03:23:17.312963    7799 main.go:141] libmachine: Parsing certificate...
	I0923 03:23:17.313038    7799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:23:17.313081    7799 main.go:141] libmachine: Decoding PEM data...
	I0923 03:23:17.313095    7799 main.go:141] libmachine: Parsing certificate...
	I0923 03:23:17.313869    7799 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:23:17.497354    7799 main.go:141] libmachine: Creating SSH key...
	I0923 03:23:17.595124    7799 main.go:141] libmachine: Creating Disk image...
	I0923 03:23:17.595131    7799 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:23:17.595332    7799 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2
	I0923 03:23:17.604610    7799 main.go:141] libmachine: STDOUT: 
	I0923 03:23:17.604627    7799 main.go:141] libmachine: STDERR: 
	I0923 03:23:17.604685    7799 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2 +20000M
	I0923 03:23:17.612711    7799 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:23:17.612724    7799 main.go:141] libmachine: STDERR: 
	I0923 03:23:17.612735    7799 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2
	I0923 03:23:17.612739    7799 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:23:17.612746    7799 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:23:17.612768    7799 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:80:1b:24:55:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2
	I0923 03:23:17.614363    7799 main.go:141] libmachine: STDOUT: 
	I0923 03:23:17.614376    7799 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:23:17.614388    7799 client.go:171] duration metric: took 301.623292ms to LocalClient.Create
	I0923 03:23:19.616512    7799 start.go:128] duration metric: took 2.360153708s to createHost
	I0923 03:23:19.616570    7799 start.go:83] releasing machines lock for "ha-301000", held for 2.360675s
	W0923 03:23:19.616921    7799 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-301000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-301000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:23:19.627535    7799 out.go:201] 
	W0923 03:23:19.631649    7799 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:23:19.631673    7799 out.go:270] * 
	* 
	W0923 03:23:19.634503    7799 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:23:19.645552    7799 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-301000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (67.137ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.89s)
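
Both provisioning attempts above die on the same root cause: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon the qemu2 driver depends on is not listening. A minimal host-side check for that precondition (a sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
)

func main() {
	// If nothing is accepting on this unix socket, every
	// socket_vmnet_client/qemu-system-aarch64 launch fails as logged.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}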

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.850333ms)

** stderr ** 
	error: cluster "ha-301000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- rollout status deployment/busybox: exit status 1 (57.695542ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.901875ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:23:19.906526    7121 retry.go:31] will retry after 1.171757157s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.411834ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:23:21.185089    7121 retry.go:31] will retry after 1.324739846s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.640667ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:23:22.616929    7121 retry.go:31] will retry after 3.258997024s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.842042ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:23:25.983104    7121 retry.go:31] will retry after 3.883763393s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.606042ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:23:29.973807    7121 retry.go:31] will retry after 6.927382421s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.409209ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:23:37.009916    7121 retry.go:31] will retry after 7.107393343s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.255084ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:23:44.224791    7121 retry.go:31] will retry after 8.757854961s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.146167ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:23:53.090132    7121 retry.go:31] will retry after 11.793609335s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.154167ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:24:04.993082    7121 retry.go:31] will retry after 28.749921961s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.155458ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:24:33.849064    7121 retry.go:31] will retry after 26.297601239s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.599958ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.794125ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.40325ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.698375ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.014167ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (30.768541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (100.79s)
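
The retry.go lines above show the harness backing off between attempts: delays that roughly double, with jitter, until the budget for the step runs out. A self-contained sketch of that pattern as observed in the log (an approximation of the visible behavior, not minikube's actual retry implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs probe with jittered, roughly doubling delays
// until it succeeds or the overall budget is exhausted.
func retryWithBackoff(budget time.Duration, probe func() error) error {
	deadline := time.Now().Add(budget)
	delay := time.Second
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
}

func main() {
	err := retryWithBackoff(10*time.Second, func() error {
		return errors.New(`no server found for cluster "ha-301000"`)
	})
	fmt.Println("gave up:", err)
}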

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-301000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.326125ms)

** stderr ** 
	error: no server found for cluster "ha-301000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (30.623916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-301000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-301000 -v=7 --alsologtostderr: exit status 83 (43.11825ms)

-- stdout --
	* The control-plane node ha-301000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-301000"

-- /stdout --
** stderr ** 
	I0923 03:25:00.634053    7881 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:00.634641    7881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:00.634645    7881 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:00.634647    7881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:00.634840    7881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:00.635075    7881 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:00.635315    7881 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:00.640354    7881 out.go:177] * The control-plane node ha-301000 host is not running: state=Stopped
	I0923 03:25:00.644187    7881 out.go:177]   To start a cluster, run: "minikube start -p ha-301000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-301000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (30.501417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-301000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-301000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.584458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-301000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-301000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-301000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (30.825083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
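
Why two errors appear in this section: with no "ha-301000" context in the kubeconfig, kubectl exits 1 and writes only to stderr, so the harness then tries to JSON-decode an empty stdout. encoding/json reports exactly the logged "unexpected end of JSON input" on empty input, as this minimal reproduction (illustration only, not the test's code) shows:

    // sketch: decoding the empty stdout left behind by a failed kubectl call
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels) // kubectl wrote nothing to stdout
        fmt.Println(err)                           // unexpected end of JSON input
    }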

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-301000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-301000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-301000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-301000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-301000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-301000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-301000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-301000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (29.865416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
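
Both assertions above fail for the same underlying reason: the profile JSON dumped in the failure message contains a single entry in Config.Nodes and a Status of "Starting", because the qemu2 start never got past the first node. A trimmed sketch of the check (struct shapes inferred from the logged JSON, not copied from minikube's sources):

    // sketch: decode `profile list --output json` and test node count and status
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type profileList struct {
        Valid []struct {
            Name   string
            Status string
            Config struct {
                Nodes []struct {
                    Name         string
                    ControlPlane bool
                    Worker       bool
                }
            }
        } `json:"valid"`
    }

    func main() {
        // Abridged version of the payload logged above.
        raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-301000","Status":"Starting",` +
            `"Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        p := pl.Valid[0]
        fmt.Println(len(p.Config.Nodes) == 4) // false: only one node was ever created
        fmt.Println(p.Status == "HAppy")      // false: profile is still "Starting"
    }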

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 status --output json -v=7 --alsologtostderr: exit status 7 (29.899125ms)

-- stdout --
	{"Name":"ha-301000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0923 03:25:00.842527    7893 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:00.842680    7893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:00.842683    7893 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:00.842686    7893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:00.842825    7893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:00.842955    7893 out.go:352] Setting JSON to true
	I0923 03:25:00.842969    7893 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:00.843042    7893 notify.go:220] Checking for updates...
	I0923 03:25:00.843175    7893 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:00.843184    7893 status.go:174] checking status of ha-301000 ...
	I0923 03:25:00.843420    7893 status.go:364] ha-301000 host status = "Stopped" (err=<nil>)
	I0923 03:25:00.843424    7893 status.go:377] host is not running, skipping remaining checks
	I0923 03:25:00.843426    7893 status.go:176] ha-301000 status: &{Name:ha-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-301000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (30.921417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
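
The decode error here is a direct consequence of the single-node state: the logged stdout shows "status --output json" printing one JSON object for the lone node, while the test unmarshals the output into []cluster.Status. encoding/json refuses to put an object into a slice, as in this minimal reproduction (illustrative types, not minikube's):

    // sketch: unmarshalling a lone status object into a slice fails
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Status struct {
        Name, Host string
    }

    func main() {
        out := []byte(`{"Name":"ha-301000","Host":"Stopped"}`) // the logged stdout, abridged
        var statuses []Status
        err := json.Unmarshal(out, &statuses)
        fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
    }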

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 node stop m02 -v=7 --alsologtostderr: exit status 85 (45.042458ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0923 03:25:00.904827    7897 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:00.905440    7897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:00.905443    7897 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:00.905446    7897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:00.905615    7897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:00.905869    7897 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:00.906076    7897 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:00.910314    7897 out.go:201] 
	W0923 03:25:00.913245    7897 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0923 03:25:00.913250    7897 out.go:270] * 
	* 
	W0923 03:25:00.915184    7897 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:25:00.918292    7897 out.go:201] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-301000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr: exit status 7 (29.823958ms)

-- stdout --
	ha-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:25:00.949028    7899 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:00.949232    7899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:00.949235    7899 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:00.949237    7899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:00.949368    7899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:00.949495    7899 out.go:352] Setting JSON to false
	I0923 03:25:00.949505    7899 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:00.949571    7899 notify.go:220] Checking for updates...
	I0923 03:25:00.949711    7899 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:00.949720    7899 status.go:174] checking status of ha-301000 ...
	I0923 03:25:00.949985    7899 status.go:364] ha-301000 host status = "Stopped" (err=<nil>)
	I0923 03:25:00.949989    7899 status.go:377] host is not running, skipping remaining checks
	I0923 03:25:00.949991    7899 status.go:176] ha-301000 status: &{Name:ha-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr": ha-301000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr": ha-301000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr": ha-301000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr": ha-301000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (30.821625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
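
The GUEST_NODE_RETRIEVE exit (status 85) is expected given the state shown earlier: the saved profile contains exactly one node, so any lookup of "m02" fails before a stop is even attempted. A sketch of that lookup (not minikube's source; node shape and error text taken from the logs above):

    // sketch: why "node stop m02" cannot find a node to stop
    package main

    import "fmt"

    type Node struct {
        Name         string
        ControlPlane bool
    }

    func findNode(nodes []Node, name string) (*Node, error) {
        for i := range nodes {
            if nodes[i].Name == name {
                return &nodes[i], nil
            }
        }
        return nil, fmt.Errorf("retrieving node: Could not find node %s", name)
    }

    func main() {
        nodes := []Node{{Name: "", ControlPlane: true}} // the one node in the profile
        _, err := findNode(nodes, "m02")
        fmt.Println(err) // retrieving node: Could not find node m02
    }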

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-301000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-301000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-301000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-301000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (30.331375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

TestMultiControlPlane/serial/RestartSecondaryNode (38.81s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 node start m02 -v=7 --alsologtostderr: exit status 85 (46.553291ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0923 03:25:01.089230    7908 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:01.089862    7908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:01.089866    7908 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:01.089868    7908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:01.090043    7908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:01.090331    7908 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:01.090532    7908 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:01.094376    7908 out.go:201] 
	W0923 03:25:01.098317    7908 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0923 03:25:01.098323    7908 out.go:270] * 
	* 
	W0923 03:25:01.100369    7908 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:25:01.103277    7908 out.go:201] 

** /stderr **
ha_test.go:422: I0923 03:25:01.089230    7908 out.go:345] Setting OutFile to fd 1 ...
I0923 03:25:01.089862    7908 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:25:01.089866    7908 out.go:358] Setting ErrFile to fd 2...
I0923 03:25:01.089868    7908 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:25:01.090043    7908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
I0923 03:25:01.090331    7908 mustload.go:65] Loading cluster: ha-301000
I0923 03:25:01.090532    7908 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 03:25:01.094376    7908 out.go:201] 
W0923 03:25:01.098317    7908 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0923 03:25:01.098323    7908 out.go:270] * 
* 
W0923 03:25:01.100369    7908 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0923 03:25:01.103277    7908 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-301000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr: exit status 7 (31.056292ms)

-- stdout --
	ha-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:25:01.137197    7910 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:01.137332    7910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:01.137335    7910 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:01.137337    7910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:01.137479    7910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:01.137593    7910 out.go:352] Setting JSON to false
	I0923 03:25:01.137603    7910 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:01.137655    7910 notify.go:220] Checking for updates...
	I0923 03:25:01.137802    7910 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:01.137814    7910 status.go:174] checking status of ha-301000 ...
	I0923 03:25:01.138047    7910 status.go:364] ha-301000 host status = "Stopped" (err=<nil>)
	I0923 03:25:01.138051    7910 status.go:377] host is not running, skipping remaining checks
	I0923 03:25:01.138053    7910 status.go:176] ha-301000 status: &{Name:ha-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 03:25:01.138957    7121 retry.go:31] will retry after 544.245719ms: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr: exit status 7 (69.732292ms)

-- stdout --
	ha-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:25:01.752718    7912 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:01.752941    7912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:01.752946    7912 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:01.752949    7912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:01.753182    7912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:01.753365    7912 out.go:352] Setting JSON to false
	I0923 03:25:01.753380    7912 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:01.753447    7912 notify.go:220] Checking for updates...
	I0923 03:25:01.753682    7912 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:01.753694    7912 status.go:174] checking status of ha-301000 ...
	I0923 03:25:01.754060    7912 status.go:364] ha-301000 host status = "Stopped" (err=<nil>)
	I0923 03:25:01.754066    7912 status.go:377] host is not running, skipping remaining checks
	I0923 03:25:01.754069    7912 status.go:176] ha-301000 status: &{Name:ha-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 03:25:01.755300    7121 retry.go:31] will retry after 1.260243328s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr: exit status 7 (69.619834ms)

-- stdout --
	ha-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:25:03.085243    7914 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:03.085422    7914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:03.085427    7914 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:03.085431    7914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:03.085605    7914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:03.085757    7914 out.go:352] Setting JSON to false
	I0923 03:25:03.085771    7914 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:03.085813    7914 notify.go:220] Checking for updates...
	I0923 03:25:03.086050    7914 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:03.086061    7914 status.go:174] checking status of ha-301000 ...
	I0923 03:25:03.086409    7914 status.go:364] ha-301000 host status = "Stopped" (err=<nil>)
	I0923 03:25:03.086414    7914 status.go:377] host is not running, skipping remaining checks
	I0923 03:25:03.086417    7914 status.go:176] ha-301000 status: &{Name:ha-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 03:25:03.087536    7121 retry.go:31] will retry after 1.636724063s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr: exit status 7 (74.002833ms)

-- stdout --
	ha-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:25:04.798407    7916 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:04.798579    7916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:04.798583    7916 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:04.798586    7916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:04.798774    7916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:04.798949    7916 out.go:352] Setting JSON to false
	I0923 03:25:04.798961    7916 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:04.798999    7916 notify.go:220] Checking for updates...
	I0923 03:25:04.799221    7916 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:04.799231    7916 status.go:174] checking status of ha-301000 ...
	I0923 03:25:04.799565    7916 status.go:364] ha-301000 host status = "Stopped" (err=<nil>)
	I0923 03:25:04.799569    7916 status.go:377] host is not running, skipping remaining checks
	I0923 03:25:04.799572    7916 status.go:176] ha-301000 status: &{Name:ha-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 03:25:04.800606    7121 retry.go:31] will retry after 2.040694233s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr: exit status 7 (75.133916ms)

-- stdout --
	ha-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:25:06.916645    7921 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:06.916826    7921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:06.916830    7921 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:06.916833    7921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:06.917019    7921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:06.917188    7921 out.go:352] Setting JSON to false
	I0923 03:25:06.917213    7921 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:06.917248    7921 notify.go:220] Checking for updates...
	I0923 03:25:06.917453    7921 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:06.917465    7921 status.go:174] checking status of ha-301000 ...
	I0923 03:25:06.917788    7921 status.go:364] ha-301000 host status = "Stopped" (err=<nil>)
	I0923 03:25:06.917793    7921 status.go:377] host is not running, skipping remaining checks
	I0923 03:25:06.917795    7921 status.go:176] ha-301000 status: &{Name:ha-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 03:25:06.918815    7121 retry.go:31] will retry after 6.659228593s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr: exit status 7 (74.266041ms)

-- stdout --
	ha-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:25:13.652346    7923 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:13.652589    7923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:13.652594    7923 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:13.652597    7923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:13.652778    7923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:13.652932    7923 out.go:352] Setting JSON to false
	I0923 03:25:13.652945    7923 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:13.652985    7923 notify.go:220] Checking for updates...
	I0923 03:25:13.653206    7923 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:13.653217    7923 status.go:174] checking status of ha-301000 ...
	I0923 03:25:13.653526    7923 status.go:364] ha-301000 host status = "Stopped" (err=<nil>)
	I0923 03:25:13.653530    7923 status.go:377] host is not running, skipping remaining checks
	I0923 03:25:13.653533    7923 status.go:176] ha-301000 status: &{Name:ha-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 03:25:13.654568    7121 retry.go:31] will retry after 9.112954224s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr: exit status 7 (75.070875ms)

-- stdout --
	ha-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:25:22.842640    7925 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:22.842837    7925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:22.842841    7925 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:22.842844    7925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:22.842997    7925 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:22.843162    7925 out.go:352] Setting JSON to false
	I0923 03:25:22.843175    7925 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:22.843217    7925 notify.go:220] Checking for updates...
	I0923 03:25:22.843429    7925 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:22.843440    7925 status.go:174] checking status of ha-301000 ...
	I0923 03:25:22.843738    7925 status.go:364] ha-301000 host status = "Stopped" (err=<nil>)
	I0923 03:25:22.843743    7925 status.go:377] host is not running, skipping remaining checks
	I0923 03:25:22.843746    7925 status.go:176] ha-301000 status: &{Name:ha-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 03:25:22.844865    7121 retry.go:31] will retry after 8.167578678s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr: exit status 7 (72.806375ms)

-- stdout --
	ha-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:25:31.085387    7927 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:31.085577    7927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:31.085582    7927 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:31.085585    7927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:31.085745    7927 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:31.085903    7927 out.go:352] Setting JSON to false
	I0923 03:25:31.085916    7927 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:31.085951    7927 notify.go:220] Checking for updates...
	I0923 03:25:31.086182    7927 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:31.086192    7927 status.go:174] checking status of ha-301000 ...
	I0923 03:25:31.086496    7927 status.go:364] ha-301000 host status = "Stopped" (err=<nil>)
	I0923 03:25:31.086501    7927 status.go:377] host is not running, skipping remaining checks
	I0923 03:25:31.086504    7927 status.go:176] ha-301000 status: &{Name:ha-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 03:25:31.087595    7121 retry.go:31] will retry after 8.671259674s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr: exit status 7 (73.336125ms)

-- stdout --
	ha-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:25:39.833159    7929 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:39.833379    7929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:39.833383    7929 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:39.833387    7929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:39.833559    7929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:39.833760    7929 out.go:352] Setting JSON to false
	I0923 03:25:39.833774    7929 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:39.833809    7929 notify.go:220] Checking for updates...
	I0923 03:25:39.834068    7929 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:39.834080    7929 status.go:174] checking status of ha-301000 ...
	I0923 03:25:39.834447    7929 status.go:364] ha-301000 host status = "Stopped" (err=<nil>)
	I0923 03:25:39.834452    7929 status.go:377] host is not running, skipping remaining checks
	I0923 03:25:39.834454    7929 status.go:176] ha-301000 status: &{Name:ha-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (33.954542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (38.81s)
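
The node start failed instantly (the profile has no m02), yet the test still spent roughly 38 seconds: the retry.go lines above show the harness re-running the status command with growing, jittered waits (0.54s, 1.26s, 1.64s, 2.04s, 6.66s, 9.11s, 8.17s, 8.67s). A sketch of that polling pattern, assuming a simple capped exponential backoff with jitter (not minikube's actual retry helper):

    // sketch: poll a failing status check with jittered, growing waits
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func checkStatus() error { return errors.New("exit status 7") } // stand-in for the CLI call

    func main() {
        wait := 500 * time.Millisecond
        deadline := time.Now().Add(38 * time.Second)
        for time.Now().Before(deadline) {
            if checkStatus() == nil {
                fmt.Println("cluster is healthy")
                return
            }
            sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
            fmt.Printf("will retry after %v: exit status 7\n", sleep)
            time.Sleep(sleep)
            if wait < 8*time.Second {
                wait *= 2 // grow toward a cap
            }
        }
        fmt.Println("gave up: host never left the Stopped state")
    }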

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-301000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-301000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-301000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-301000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-301000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-301000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-301000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-301000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (30.570125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)
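Editor's note: the two assertions above (ha_test.go:304 and ha_test.go:307) decode the `profile list --output json` payload and inspect the node count and the profile status. A minimal sketch of that style of check, assuming only the fields visible in the log (the struct and binary path here are illustrative, not minikube's actual types):

-- sketch --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Illustrative types covering only the fields the failing assertions look at.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []json.RawMessage `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The failures above: 4 nodes expected (got 1), status "HAppy" expected (got "Starting").
		fmt.Printf("profile %s: %d nodes, status %q\n", p.Name, len(p.Config.Nodes), p.Status)
	}
}
-- /sketch --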

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-301000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-301000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-301000 -v=7 --alsologtostderr: (2.089867583s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-301000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-301000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.217339584s)

                                                
                                                
-- stdout --
	* [ha-301000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-301000" primary control-plane node in "ha-301000" cluster
	* Restarting existing qemu2 VM for "ha-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:25:42.135514    7952 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:42.135680    7952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:42.135684    7952 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:42.135688    7952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:42.135879    7952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:42.137130    7952 out.go:352] Setting JSON to false
	I0923 03:25:42.156449    7952 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5113,"bootTime":1727082029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:25:42.156526    7952 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:25:42.161209    7952 out.go:177] * [ha-301000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:25:42.168165    7952 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:25:42.168213    7952 notify.go:220] Checking for updates...
	I0923 03:25:42.175017    7952 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:25:42.178090    7952 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:25:42.181119    7952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:25:42.184052    7952 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:25:42.187107    7952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:25:42.190428    7952 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:42.190488    7952 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:25:42.195025    7952 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:25:42.202116    7952 start.go:297] selected driver: qemu2
	I0923 03:25:42.202123    7952 start.go:901] validating driver "qemu2" against &{Name:ha-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:25:42.202196    7952 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:25:42.204701    7952 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:25:42.204728    7952 cni.go:84] Creating CNI manager for ""
	I0923 03:25:42.204753    7952 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 03:25:42.204799    7952 start.go:340] cluster config:
	{Name:ha-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-301000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:25:42.208632    7952 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:25:42.216095    7952 out.go:177] * Starting "ha-301000" primary control-plane node in "ha-301000" cluster
	I0923 03:25:42.219924    7952 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:25:42.219944    7952 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:25:42.219950    7952 cache.go:56] Caching tarball of preloaded images
	I0923 03:25:42.220021    7952 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:25:42.220027    7952 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:25:42.220085    7952 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/ha-301000/config.json ...
	I0923 03:25:42.220518    7952 start.go:360] acquireMachinesLock for ha-301000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:25:42.220557    7952 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "ha-301000"
	I0923 03:25:42.220567    7952 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:25:42.220571    7952 fix.go:54] fixHost starting: 
	I0923 03:25:42.220695    7952 fix.go:112] recreateIfNeeded on ha-301000: state=Stopped err=<nil>
	W0923 03:25:42.220707    7952 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:25:42.228105    7952 out.go:177] * Restarting existing qemu2 VM for "ha-301000" ...
	I0923 03:25:42.232053    7952 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:25:42.232096    7952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:80:1b:24:55:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2
	I0923 03:25:42.234390    7952 main.go:141] libmachine: STDOUT: 
	I0923 03:25:42.234409    7952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:25:42.234441    7952 fix.go:56] duration metric: took 13.867833ms for fixHost
	I0923 03:25:42.234445    7952 start.go:83] releasing machines lock for "ha-301000", held for 13.884292ms
	W0923 03:25:42.234452    7952 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:25:42.234485    7952 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:25:42.234490    7952 start.go:729] Will try again in 5 seconds ...
	I0923 03:25:47.236603    7952 start.go:360] acquireMachinesLock for ha-301000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:25:47.237016    7952 start.go:364] duration metric: took 292.584µs to acquireMachinesLock for "ha-301000"
	I0923 03:25:47.237135    7952 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:25:47.237155    7952 fix.go:54] fixHost starting: 
	I0923 03:25:47.237864    7952 fix.go:112] recreateIfNeeded on ha-301000: state=Stopped err=<nil>
	W0923 03:25:47.237891    7952 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:25:47.242340    7952 out.go:177] * Restarting existing qemu2 VM for "ha-301000" ...
	I0923 03:25:47.245259    7952 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:25:47.245513    7952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:80:1b:24:55:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2
	I0923 03:25:47.254397    7952 main.go:141] libmachine: STDOUT: 
	I0923 03:25:47.254481    7952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:25:47.254578    7952 fix.go:56] duration metric: took 17.42325ms for fixHost
	I0923 03:25:47.254598    7952 start.go:83] releasing machines lock for "ha-301000", held for 17.555708ms
	W0923 03:25:47.254829    7952 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:25:47.262407    7952 out.go:201] 
	W0923 03:25:47.266297    7952 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:25:47.266323    7952 out.go:270] * 
	* 
	W0923 03:25:47.269115    7952 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:25:47.278241    7952 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-301000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-301000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (32.598542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.44s)
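Editor's note: every restart in this run dies the same way: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never gets its network file descriptor. A quick way to confirm the daemon is down is to dial the socket yourself; a minimal sketch, using the socket path taken from the log:

-- sketch --
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same socket path the failing qemu invocations use.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" here matches the driver error in the log:
		// nothing is listening, even if the socket file still exists on disk.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
-- /sketch --

If the dial fails, restarting the socket_vmnet daemon on the host (how depends on how it was installed; the /opt/socket_vmnet layout seen here is the project's from-source default, usually run as a launchd service) is the likely fix, rather than the suggested "minikube delete".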

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.772583ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-301000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-301000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:25:47.422957    7964 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:47.423378    7964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:47.423382    7964 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:47.423385    7964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:47.423530    7964 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:47.423756    7964 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:47.423961    7964 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:47.428336    7964 out.go:177] * The control-plane node ha-301000 host is not running: state=Stopped
	I0923 03:25:47.431163    7964 out.go:177]   To start a cluster, run: "minikube start -p ha-301000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-301000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr: exit status 7 (30.76825ms)

                                                
                                                
-- stdout --
	ha-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:25:47.464251    7966 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:47.464394    7966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:47.464397    7966 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:47.464399    7966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:47.464529    7966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:47.464647    7966 out.go:352] Setting JSON to false
	I0923 03:25:47.464657    7966 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:47.464720    7966 notify.go:220] Checking for updates...
	I0923 03:25:47.464854    7966 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:47.464865    7966 status.go:174] checking status of ha-301000 ...
	I0923 03:25:47.465085    7966 status.go:364] ha-301000 host status = "Stopped" (err=<nil>)
	I0923 03:25:47.465088    7966 status.go:377] host is not running, skipping remaining checks
	I0923 03:25:47.465091    7966 status.go:176] ha-301000 status: &{Name:ha-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (30.601084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
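Editor's note: the post-mortem helper's `--format={{.Host}}` flag is a Go text/template applied to the status value that status.go:176 logs (&{Name:ha-301000 Host:Stopped ...}). A self-contained sketch of that rendering, using a pared-down struct with only the fields visible in the log:

-- sketch --
package main

import (
	"os"
	"text/template"
)

// Pared-down version of the status struct printed at status.go:176 above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	st := Status{Name: "ha-301000", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Prints "Stopped", matching the -- stdout -- blocks above.
	tmpl.Execute(os.Stdout, st)
}
-- /sketch --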

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-301000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-301000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-301000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-301000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (30.095625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (2.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-301000 stop -v=7 --alsologtostderr: (1.908764083s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr: exit status 7 (66.95ms)

                                                
                                                
-- stdout --
	ha-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:25:49.547917    7990 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:49.548100    7990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:49.548104    7990 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:49.548107    7990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:49.548283    7990 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:49.548429    7990 out.go:352] Setting JSON to false
	I0923 03:25:49.548442    7990 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:49.548481    7990 notify.go:220] Checking for updates...
	I0923 03:25:49.548695    7990 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:49.548705    7990 status.go:174] checking status of ha-301000 ...
	I0923 03:25:49.549035    7990 status.go:364] ha-301000 host status = "Stopped" (err=<nil>)
	I0923 03:25:49.549040    7990 status.go:377] host is not running, skipping remaining checks
	I0923 03:25:49.549042    7990 status.go:176] ha-301000 status: &{Name:ha-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr": ha-301000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr": ha-301000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-301000 status -v=7 --alsologtostderr": ha-301000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (32.791333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.01s)
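Editor's note: the three assertions at ha_test.go:543, :549 and :552 are plain substring counts over the status text: after stopping the full HA cluster the test wants two control-plane nodes, three stopped kubelets and two stopped apiservers, but the single surviving node yields a count of one each. A sketch of that style of check (the expected counts come from the messages above; the helper is made up for illustration):

-- sketch --
package main

import (
	"fmt"
	"strings"
)

// checkStatus mirrors the kind of substring counting the failed assertions do.
// Illustrative only; the real checks live in ha_test.go.
func checkStatus(out string) {
	fmt.Println("control planes:", strings.Count(out, "type: Control Plane"))     // want 2
	fmt.Println("stopped kubelets:", strings.Count(out, "kubelet: Stopped"))      // want 3
	fmt.Println("stopped apiservers:", strings.Count(out, "apiserver: Stopped")) // want 2
}

func main() {
	// Status text as printed in the -- stdout -- block above: one node only.
	out := "ha-301000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	checkStatus(out)
}
-- /sketch --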

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-301000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-301000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.189055042s)

                                                
                                                
-- stdout --
	* [ha-301000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-301000" primary control-plane node in "ha-301000" cluster
	* Restarting existing qemu2 VM for "ha-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:25:49.611622    7994 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:49.611752    7994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:49.611756    7994 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:49.611758    7994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:49.611863    7994 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:49.612895    7994 out.go:352] Setting JSON to false
	I0923 03:25:49.629533    7994 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5120,"bootTime":1727082029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:25:49.629606    7994 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:25:49.634975    7994 out.go:177] * [ha-301000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:25:49.643030    7994 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:25:49.643104    7994 notify.go:220] Checking for updates...
	I0923 03:25:49.650928    7994 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:25:49.653936    7994 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:25:49.657902    7994 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:25:49.660954    7994 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:25:49.663918    7994 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:25:49.667222    7994 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:49.667482    7994 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:25:49.671900    7994 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:25:49.678883    7994 start.go:297] selected driver: qemu2
	I0923 03:25:49.678897    7994 start.go:901] validating driver "qemu2" against &{Name:ha-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:25:49.678955    7994 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:25:49.681347    7994 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:25:49.681377    7994 cni.go:84] Creating CNI manager for ""
	I0923 03:25:49.681401    7994 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 03:25:49.681444    7994 start.go:340] cluster config:
	{Name:ha-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-301000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:25:49.685077    7994 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:25:49.693858    7994 out.go:177] * Starting "ha-301000" primary control-plane node in "ha-301000" cluster
	I0923 03:25:49.697896    7994 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:25:49.697920    7994 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:25:49.697927    7994 cache.go:56] Caching tarball of preloaded images
	I0923 03:25:49.697976    7994 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:25:49.697981    7994 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:25:49.698033    7994 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/ha-301000/config.json ...
	I0923 03:25:49.698470    7994 start.go:360] acquireMachinesLock for ha-301000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:25:49.698499    7994 start.go:364] duration metric: took 22.875µs to acquireMachinesLock for "ha-301000"
	I0923 03:25:49.698509    7994 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:25:49.698514    7994 fix.go:54] fixHost starting: 
	I0923 03:25:49.698637    7994 fix.go:112] recreateIfNeeded on ha-301000: state=Stopped err=<nil>
	W0923 03:25:49.698646    7994 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:25:49.706907    7994 out.go:177] * Restarting existing qemu2 VM for "ha-301000" ...
	I0923 03:25:49.710954    7994 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:25:49.710983    7994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:80:1b:24:55:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2
	I0923 03:25:49.712909    7994 main.go:141] libmachine: STDOUT: 
	I0923 03:25:49.712930    7994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:25:49.712961    7994 fix.go:56] duration metric: took 14.447458ms for fixHost
	I0923 03:25:49.712967    7994 start.go:83] releasing machines lock for "ha-301000", held for 14.463417ms
	W0923 03:25:49.712973    7994 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:25:49.713007    7994 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:25:49.713012    7994 start.go:729] Will try again in 5 seconds ...
	I0923 03:25:54.715165    7994 start.go:360] acquireMachinesLock for ha-301000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:25:54.715529    7994 start.go:364] duration metric: took 298.458µs to acquireMachinesLock for "ha-301000"
	I0923 03:25:54.715643    7994 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:25:54.715660    7994 fix.go:54] fixHost starting: 
	I0923 03:25:54.716376    7994 fix.go:112] recreateIfNeeded on ha-301000: state=Stopped err=<nil>
	W0923 03:25:54.716402    7994 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:25:54.720785    7994 out.go:177] * Restarting existing qemu2 VM for "ha-301000" ...
	I0923 03:25:54.727762    7994 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:25:54.727999    7994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:80:1b:24:55:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/ha-301000/disk.qcow2
	I0923 03:25:54.736838    7994 main.go:141] libmachine: STDOUT: 
	I0923 03:25:54.736907    7994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:25:54.736982    7994 fix.go:56] duration metric: took 21.318875ms for fixHost
	I0923 03:25:54.737004    7994 start.go:83] releasing machines lock for "ha-301000", held for 21.450709ms
	W0923 03:25:54.737230    7994 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:25:54.744760    7994 out.go:201] 
	W0923 03:25:54.748745    7994 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:25:54.748793    7994 out.go:270] * 
	* 
	W0923 03:25:54.751485    7994 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:25:54.759766    7994 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-301000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (71.959417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
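Editor's note: the stderr trace above shows the start path's retry shape: fixHost is attempted once, and on failure minikube waits a fixed five seconds (start.go:729) and tries exactly once more before exiting 80 with GUEST_PROVISION. Reduced to a sketch under those observations (function names are placeholders, not minikube's):

-- sketch --
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that keeps failing with
// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			return // the CLI exits 80 at this point
		}
	}
	fmt.Println("host started")
}
-- /sketch --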

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-301000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-301000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-301000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-301000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (30.927417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
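
The status assertion at ha_test.go:413 works by decoding the output of `profile list --output json` and comparing the profile's "Status" field with the expected lifecycle state. A minimal, self-contained sketch of that check in Go (an illustration only, not the actual test helper; the struct mirrors just the two fields of the payload above that the assertion reads):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList mirrors the subset of `minikube profile list --output json`
    // that the status assertion inspects; field names come from the log above.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
        } `json:"valid"`
    }

    func main() {
        raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-301000","Status":"Starting"}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            if p.Name == "ha-301000" && p.Status != "Degraded" {
                fmt.Printf("expected %q to be Degraded, got %q\n", p.Name, p.Status)
            }
        }
    }

Because the cluster never restarted (the start command above exited with status 80), the profile is stuck in "Starting", so every status and node-count assertion in this serial group fails against the same payload.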

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-301000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-301000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.577166ms)

-- stdout --
	* The control-plane node ha-301000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-301000"

-- /stdout --
** stderr ** 
	I0923 03:25:54.956804    8009 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:25:54.956960    8009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:54.956963    8009 out.go:358] Setting ErrFile to fd 2...
	I0923 03:25:54.956965    8009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:25:54.957078    8009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:25:54.957316    8009 mustload.go:65] Loading cluster: ha-301000
	I0923 03:25:54.957515    8009 config.go:182] Loaded profile config "ha-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:25:54.961617    8009 out.go:177] * The control-plane node ha-301000 host is not running: state=Stopped
	I0923 03:25:54.965647    8009 out.go:177]   To start a cluster, run: "minikube start -p ha-301000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-301000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (30.767375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-301000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-301000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-301000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-301000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-301000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-301000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-301000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-301000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-301000 -n ha-301000: exit status 7 (31.768375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

TestImageBuild/serial/Setup (9.93s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-430000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-430000 --driver=qemu2 : exit status 80 (9.864683875s)

-- stdout --
	* [image-430000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-430000" primary control-plane node in "image-430000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-430000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-430000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-430000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-430000 -n image-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-430000 -n image-430000: exit status 7 (68.746417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-430000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.93s)
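
Every qemu2 start in this run fails at the same point: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet before it can hand QEMU a network fd, and nothing on this host is accepting connections there. A standalone probe of that precondition (a hypothetical diagnostic, not part of the suite; the socket path is the SocketVMnetPath shown in the cluster config):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Dial the same unix socket that socket_vmnet_client connects to.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            // On this host this reports "connection refused", matching the
            // ERROR lines captured in the stdout above.
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

Until the socket_vmnet daemon is running and reachable at that path, every driver=qemu2 test below that needs to boot a VM will keep failing with GUEST_PROVISION, independent of what the test itself exercises.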

TestJSONOutput/start/Command (9.96s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-370000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-370000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.963054542s)

-- stdout --
	{"specversion":"1.0","id":"33831dce-c063-4b74-bc03-c7d015470461","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-370000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5a95ec30-66c7-45bf-baf6-46865e556386","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"7c4f366c-bfc9-45e3-b3b7-0f6754aa6d1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig"}}
	{"specversion":"1.0","id":"a3e23069-b0db-425b-b17e-161128e8dc48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"6ea86de9-520c-4218-a56f-acd31af65237","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c33a86c0-1d81-4687-abd9-31a950d9f555","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube"}}
	{"specversion":"1.0","id":"962967d0-9bea-418f-965c-e0a7589c7a06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"084e5458-d348-418e-b682-c304f6b6d2e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f30fec40-6cbd-48b7-a450-aae64ca58ddf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"f97761af-2fc7-444c-92a9-9c7659ce6ad7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-370000\" primary control-plane node in \"json-output-370000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f53a2a2-3d52-43d8-ae31-109bafcc83be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"43cb7038-d73c-42d5-950a-ed1ea7ebcfbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-370000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"918a25fb-0b0e-4afe-81e5-4593870323c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"f7f9b75d-6f7b-4b2d-80c2-5463aa122ce3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"a21c7695-825f-490c-b868-c107c9a69664","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-370000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"9ee51523-cddd-4143-80fc-fd46b93e0942","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"663637fd-6c79-4627-a2d5-c7e7789ada7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-370000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.96s)
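
The secondary errors at json_output_test.go:213 and json_output_test.go:70 are a knock-on effect, not a separate bug: the test decodes stdout line by line as JSON cloud events, and the bare "OUTPUT:" / "ERROR:" lines emitted during the failed VM start are plain text. A minimal reproduction of the decode error (a sketch, not the test's actual parser); the "invalid character '*'" variant in the unpause failure further below has the same cause:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var event map[string]interface{}
        // A well-formed cloud-event line from the output above decodes cleanly.
        ok := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`
        fmt.Println(json.Unmarshal([]byte(ok), &event)) // <nil>
        // The stray "OUTPUT: " line does not: json.Unmarshal reports
        // `invalid character 'O' looking for beginning of value`.
        fmt.Println(json.Unmarshal([]byte("OUTPUT: "), &event))
    }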

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-370000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-370000 --output=json --user=testUser: exit status 83 (80.312458ms)

-- stdout --
	{"specversion":"1.0","id":"d6126c3c-ccf9-4847-9cf3-45179f499073","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-370000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"6c65b812-9cc3-4f2f-9c4e-a598901585fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-370000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-370000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-370000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-370000 --output=json --user=testUser: exit status 83 (46.231542ms)

-- stdout --
	* The control-plane node json-output-370000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-370000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-370000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-370000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.3s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-674000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-674000 --driver=qemu2 : exit status 80 (9.99719125s)

-- stdout --
	* [first-674000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-674000" primary control-plane node in "first-674000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-674000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-674000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-23 03:26:28.787307 -0700 PDT m=+443.603816626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-675000 -n second-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-675000 -n second-675000: exit status 85 (83.140417ms)

-- stdout --
	* Profile "second-675000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-675000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-675000" host is not running, skipping log retrieval (state="* Profile \"second-675000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-675000\"")
helpers_test.go:175: Cleaning up "second-675000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-675000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-23 03:26:28.979504 -0700 PDT m=+443.796017084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-674000 -n first-674000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-674000 -n first-674000: exit status 7 (30.464ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-674000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-674000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-674000
--- FAIL: TestMinikubeProfile (10.30s)

TestMountStart/serial/StartWithMountFirst (10.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-678000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-678000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.047252667s)

-- stdout --
	* [mount-start-1-678000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-678000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-678000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-678000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-678000 -n mount-start-1-678000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-678000 -n mount-start-1-678000: exit status 7 (70.039875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-678000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.12s)

TestMultiNode/serial/FreshStart2Nodes (9.91s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-896000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-896000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.839562042s)

-- stdout --
	* [multinode-896000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-896000" primary control-plane node in "multinode-896000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-896000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:26:39.414679    8151 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:26:39.414819    8151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:26:39.414822    8151 out.go:358] Setting ErrFile to fd 2...
	I0923 03:26:39.414825    8151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:26:39.414950    8151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:26:39.416020    8151 out.go:352] Setting JSON to false
	I0923 03:26:39.432190    8151 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5170,"bootTime":1727082029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:26:39.432257    8151 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:26:39.436946    8151 out.go:177] * [multinode-896000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:26:39.443847    8151 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:26:39.443912    8151 notify.go:220] Checking for updates...
	I0923 03:26:39.449895    8151 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:26:39.452858    8151 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:26:39.455927    8151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:26:39.458906    8151 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:26:39.460392    8151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:26:39.464055    8151 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:26:39.467905    8151 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:26:39.472903    8151 start.go:297] selected driver: qemu2
	I0923 03:26:39.472910    8151 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:26:39.472917    8151 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:26:39.475297    8151 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:26:39.477875    8151 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:26:39.480970    8151 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:26:39.480988    8151 cni.go:84] Creating CNI manager for ""
	I0923 03:26:39.481007    8151 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 03:26:39.481013    8151 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 03:26:39.481040    8151 start.go:340] cluster config:
	{Name:multinode-896000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:26:39.484904    8151 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:26:39.491922    8151 out.go:177] * Starting "multinode-896000" primary control-plane node in "multinode-896000" cluster
	I0923 03:26:39.495831    8151 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:26:39.495846    8151 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:26:39.495852    8151 cache.go:56] Caching tarball of preloaded images
	I0923 03:26:39.495906    8151 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:26:39.495912    8151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:26:39.496115    8151 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/multinode-896000/config.json ...
	I0923 03:26:39.496127    8151 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/multinode-896000/config.json: {Name:mk051268a23d09e964956fc81d017c3e4b219d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:26:39.496350    8151 start.go:360] acquireMachinesLock for multinode-896000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:26:39.496384    8151 start.go:364] duration metric: took 28.167µs to acquireMachinesLock for "multinode-896000"
	I0923 03:26:39.496397    8151 start.go:93] Provisioning new machine with config: &{Name:multinode-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:26:39.496425    8151 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:26:39.504860    8151 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:26:39.522986    8151 start.go:159] libmachine.API.Create for "multinode-896000" (driver="qemu2")
	I0923 03:26:39.523014    8151 client.go:168] LocalClient.Create starting
	I0923 03:26:39.523075    8151 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:26:39.523107    8151 main.go:141] libmachine: Decoding PEM data...
	I0923 03:26:39.523116    8151 main.go:141] libmachine: Parsing certificate...
	I0923 03:26:39.523153    8151 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:26:39.523175    8151 main.go:141] libmachine: Decoding PEM data...
	I0923 03:26:39.523184    8151 main.go:141] libmachine: Parsing certificate...
	I0923 03:26:39.523522    8151 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:26:39.693978    8151 main.go:141] libmachine: Creating SSH key...
	I0923 03:26:39.760313    8151 main.go:141] libmachine: Creating Disk image...
	I0923 03:26:39.760318    8151 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:26:39.760500    8151 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2
	I0923 03:26:39.769573    8151 main.go:141] libmachine: STDOUT: 
	I0923 03:26:39.769588    8151 main.go:141] libmachine: STDERR: 
	I0923 03:26:39.769654    8151 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2 +20000M
	I0923 03:26:39.777651    8151 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:26:39.777666    8151 main.go:141] libmachine: STDERR: 
	I0923 03:26:39.777685    8151 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2
	I0923 03:26:39.777689    8151 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:26:39.777703    8151 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:26:39.777731    8151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:73:74:a5:74:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2
	I0923 03:26:39.779363    8151 main.go:141] libmachine: STDOUT: 
	I0923 03:26:39.779376    8151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:26:39.779396    8151 client.go:171] duration metric: took 256.381583ms to LocalClient.Create
	I0923 03:26:41.781529    8151 start.go:128] duration metric: took 2.285131958s to createHost
	I0923 03:26:41.781604    8151 start.go:83] releasing machines lock for "multinode-896000", held for 2.28526075s
	W0923 03:26:41.781723    8151 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:26:41.795828    8151 out.go:177] * Deleting "multinode-896000" in qemu2 ...
	W0923 03:26:41.826421    8151 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:26:41.826443    8151 start.go:729] Will try again in 5 seconds ...
	I0923 03:26:46.828589    8151 start.go:360] acquireMachinesLock for multinode-896000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:26:46.829026    8151 start.go:364] duration metric: took 347.25µs to acquireMachinesLock for "multinode-896000"
	I0923 03:26:46.829138    8151 start.go:93] Provisioning new machine with config: &{Name:multinode-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:26:46.829437    8151 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:26:46.846201    8151 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:26:46.898089    8151 start.go:159] libmachine.API.Create for "multinode-896000" (driver="qemu2")
	I0923 03:26:46.898137    8151 client.go:168] LocalClient.Create starting
	I0923 03:26:46.898245    8151 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:26:46.898313    8151 main.go:141] libmachine: Decoding PEM data...
	I0923 03:26:46.898330    8151 main.go:141] libmachine: Parsing certificate...
	I0923 03:26:46.898391    8151 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:26:46.898435    8151 main.go:141] libmachine: Decoding PEM data...
	I0923 03:26:46.898451    8151 main.go:141] libmachine: Parsing certificate...
	I0923 03:26:46.899079    8151 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:26:47.078705    8151 main.go:141] libmachine: Creating SSH key...
	I0923 03:26:47.159522    8151 main.go:141] libmachine: Creating Disk image...
	I0923 03:26:47.159528    8151 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:26:47.159721    8151 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2
	I0923 03:26:47.169048    8151 main.go:141] libmachine: STDOUT: 
	I0923 03:26:47.169067    8151 main.go:141] libmachine: STDERR: 
	I0923 03:26:47.169126    8151 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2 +20000M
	I0923 03:26:47.176929    8151 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:26:47.176944    8151 main.go:141] libmachine: STDERR: 
	I0923 03:26:47.176954    8151 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2
	I0923 03:26:47.176963    8151 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:26:47.176971    8151 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:26:47.176999    8151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:a3:ab:fd:fb:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2
	I0923 03:26:47.178681    8151 main.go:141] libmachine: STDOUT: 
	I0923 03:26:47.178697    8151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:26:47.178711    8151 client.go:171] duration metric: took 280.574375ms to LocalClient.Create
	I0923 03:26:49.180846    8151 start.go:128] duration metric: took 2.351425458s to createHost
	I0923 03:26:49.180946    8151 start.go:83] releasing machines lock for "multinode-896000", held for 2.351916084s
	W0923 03:26:49.181306    8151 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:26:49.194910    8151 out.go:201] 
	W0923 03:26:49.199048    8151 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:26:49.199074    8151 out.go:270] * 
	* 
	W0923 03:26:49.201756    8151 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:26:49.212974    8151 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-896000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (68.093083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.91s)
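
Every TestMultiNode failure below follows from the single stderr line above: QEMU's networking helper could not reach the socket_vmnet daemon (Failed to connect to "/var/run/socket_vmnet": Connection refused), so the VM was never created. A quick way to confirm whether the daemon is listening is to dial the unix socket directly. The Go sketch below is a diagnostic illustration, not minikube code; the socket path is taken from the failing socket_vmnet_client invocation above.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Socket path from the qemu/socket_vmnet_client command line above.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // "connection refused" here reproduces the failure in this log:
            // the daemon is not running (or not listening at this path), so
            // every qemu2 VM create in this run fails the same way.
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }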

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (113.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.59425ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-896000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- rollout status deployment/busybox: exit status 1 (57.232542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.177916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:26:49.470356    7121 retry.go:31] will retry after 1.291533073s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.572ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:26:50.868785    7121 retry.go:31] will retry after 1.246843138s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.758917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:26:52.221676    7121 retry.go:31] will retry after 2.727140877s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.080875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:26:55.055291    7121 retry.go:31] will retry after 3.26300338s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.476666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:26:58.424137    7121 retry.go:31] will retry after 3.643186435s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.828625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:27:02.176506    7121 retry.go:31] will retry after 5.116042034s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.174959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:27:07.399028    7121 retry.go:31] will retry after 14.752026584s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.420667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:27:22.258506    7121 retry.go:31] will retry after 19.85746737s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.611ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:27:42.222668    7121 retry.go:31] will retry after 21.336935512s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.004792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 03:28:03.666664    7121 retry.go:31] will retry after 39.216862121s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.992ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.3115ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.999167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.833375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.531166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (30.861417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (113.96s)
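
The retry.go lines above show how the test polls for Pod IPs: each failed attempt sleeps a jittered, roughly growing interval (~1.3s, ~1.2s, ~2.7s, ... up to ~39s) until the overall budget is exhausted. The Go sketch below illustrates that retry-with-jittered-backoff shape, assuming a simple doubling base; it is an illustration of the pattern, not minikube's actual retry.go.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls fn until it succeeds or the budget is spent, sleeping a
    // jittered, roughly doubling interval between attempts.
    func retry(fn func() error, budget time.Duration) error {
        deadline := time.Now().Add(budget)
        base := time.Second
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("giving up: %w", err)
            }
            // Sleep between 0.5x and 1.5x of the current base, then double it.
            sleep := base/2 + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            base *= 2
        }
    }

    func main() {
        // With no API server behind the kubeconfig entry, every attempt fails,
        // mirroring the repeated "no server found" errors in this test.
        _ = retry(func() error {
            return errors.New(`no server found for cluster "multinode-896000"`)
        }, 20*time.Second)
    }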

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-896000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.055875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (29.838375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-896000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-896000 -v 3 --alsologtostderr: exit status 83 (42.465666ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-896000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-896000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:28:43.367414    8231 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:28:43.367571    8231 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:43.367574    8231 out.go:358] Setting ErrFile to fd 2...
	I0923 03:28:43.367577    8231 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:43.367728    8231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:28:43.367971    8231 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:28:43.368190    8231 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:28:43.372924    8231 out.go:177] * The control-plane node multinode-896000 host is not running: state=Stopped
	I0923 03:28:43.376136    8231 out.go:177]   To start a cluster, run: "minikube start -p multinode-896000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-896000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (30.630416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-896000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-896000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.702958ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-896000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-896000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-896000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (30.64075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
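
The secondary error at multinode_test.go:230, "unexpected end of JSON input", is a direct consequence of the first: kubectl exited non-zero, so its stdout was empty, and decoding zero bytes of JSON always fails with exactly that message. A minimal demonstration (illustrative only, not the test's code):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var labels []map[string]string
        // Empty stdout from the failed kubectl run decodes to nothing:
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }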

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-896000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-896000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-896000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-896000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (29.653375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
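
The assertion above decodes the 'profile list --output json' payload and counts Config.Nodes: because the cluster never started, the profile still holds only the original control-plane entry (1 node) where the test expects 3 (the two-node start plus the node added later). The Go sketch below mirrors that check against the JSON shape shown in the log; the struct and field mappings here are illustrative, not minikube's real types.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList models just enough of the payload above to count nodes;
    // field names follow the JSON keys in the log, the type names are made up.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Config struct {
                Nodes []struct {
                    Name         string `json:"Name"`
                    ControlPlane bool   `json:"ControlPlane"`
                    Worker       bool   `json:"Worker"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        // Trimmed-down copy of the payload captured above.
        data := []byte(`{"invalid":[],"valid":[{"Name":"multinode-896000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(data, &pl); err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes)) // prints 1 where the test wants 3
    }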

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status --output json --alsologtostderr: exit status 7 (30.670084ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-896000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:28:43.575494    8243 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:28:43.575890    8243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:43.575895    8243 out.go:358] Setting ErrFile to fd 2...
	I0923 03:28:43.575898    8243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:43.576081    8243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:28:43.576236    8243 out.go:352] Setting JSON to true
	I0923 03:28:43.576246    8243 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:28:43.576418    8243 notify.go:220] Checking for updates...
	I0923 03:28:43.576677    8243 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:28:43.576688    8243 status.go:174] checking status of multinode-896000 ...
	I0923 03:28:43.576939    8243 status.go:364] multinode-896000 host status = "Stopped" (err=<nil>)
	I0923 03:28:43.576944    8243 status.go:377] host is not running, skipping remaining checks
	I0923 03:28:43.576946    8243 status.go:176] multinode-896000 status: &{Name:multinode-896000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-896000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (30.479583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
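
The decode error at multinode_test.go:191, "json: cannot unmarshal object into Go value of type []cluster.Status", arises because 'status --output json' printed a single JSON object for the lone node (see the stdout above), while the test unmarshals into a slice. A stand-in demonstration with a local Status type, not minikube's cluster package:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        // The stdout captured above: one object, not an array.
        out := []byte(`{"Name":"multinode-896000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

        var many []Status
        fmt.Println(json.Unmarshal(out, &many)) // json: cannot unmarshal object into Go value of type []main.Status

        var one Status
        fmt.Println(json.Unmarshal(out, &one)) // <nil>; a single object decodes fine
    }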

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 node stop m03: exit status 85 (45.968625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-896000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status: exit status 7 (30.884458ms)

                                                
                                                
-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status --alsologtostderr: exit status 7 (30.563375ms)

                                                
                                                
-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:28:43.714927    8251 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:28:43.715111    8251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:43.715114    8251 out.go:358] Setting ErrFile to fd 2...
	I0923 03:28:43.715119    8251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:43.715260    8251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:28:43.715374    8251 out.go:352] Setting JSON to false
	I0923 03:28:43.715385    8251 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:28:43.715448    8251 notify.go:220] Checking for updates...
	I0923 03:28:43.715587    8251 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:28:43.715597    8251 status.go:174] checking status of multinode-896000 ...
	I0923 03:28:43.715825    8251 status.go:364] multinode-896000 host status = "Stopped" (err=<nil>)
	I0923 03:28:43.715828    8251 status.go:377] host is not running, skipping remaining checks
	I0923 03:28:43.715830    8251 status.go:176] multinode-896000 status: &{Name:multinode-896000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-896000 status --alsologtostderr": multinode-896000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (31.181292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (54.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 node start m03 -v=7 --alsologtostderr: exit status 85 (48.725375ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:28:43.777284    8255 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:28:43.777661    8255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:43.777664    8255 out.go:358] Setting ErrFile to fd 2...
	I0923 03:28:43.777667    8255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:43.777818    8255 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:28:43.778049    8255 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:28:43.778238    8255 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:28:43.782187    8255 out.go:201] 
	W0923 03:28:43.786149    8255 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0923 03:28:43.786155    8255 out.go:270] * 
	* 
	W0923 03:28:43.788210    8255 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:28:43.792226    8255 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0923 03:28:43.777284    8255 out.go:345] Setting OutFile to fd 1 ...
I0923 03:28:43.777661    8255 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:28:43.777664    8255 out.go:358] Setting ErrFile to fd 2...
I0923 03:28:43.777667    8255 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 03:28:43.777818    8255 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
I0923 03:28:43.778049    8255 mustload.go:65] Loading cluster: multinode-896000
I0923 03:28:43.778238    8255 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 03:28:43.782187    8255 out.go:201] 
W0923 03:28:43.786149    8255 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0923 03:28:43.786155    8255 out.go:270] * 
* 
W0923 03:28:43.788210    8255 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0923 03:28:43.792226    8255 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-896000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr: exit status 7 (30.673084ms)

                                                
                                                
-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:28:43.826026    8257 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:28:43.826162    8257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:43.826165    8257 out.go:358] Setting ErrFile to fd 2...
	I0923 03:28:43.826167    8257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:43.826287    8257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:28:43.826411    8257 out.go:352] Setting JSON to false
	I0923 03:28:43.826422    8257 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:28:43.826488    8257 notify.go:220] Checking for updates...
	I0923 03:28:43.826644    8257 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:28:43.826653    8257 status.go:174] checking status of multinode-896000 ...
	I0923 03:28:43.826906    8257 status.go:364] multinode-896000 host status = "Stopped" (err=<nil>)
	I0923 03:28:43.826910    8257 status.go:377] host is not running, skipping remaining checks
	I0923 03:28:43.826912    8257 status.go:176] multinode-896000 status: &{Name:multinode-896000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0923 03:28:43.827791    7121 retry.go:31] will retry after 650.384671ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr: exit status 7 (74.569291ms)

                                                
                                                
-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:28:44.552887    8259 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:28:44.553060    8259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:44.553064    8259 out.go:358] Setting ErrFile to fd 2...
	I0923 03:28:44.553067    8259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:44.553235    8259 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:28:44.553398    8259 out.go:352] Setting JSON to false
	I0923 03:28:44.553410    8259 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:28:44.553454    8259 notify.go:220] Checking for updates...
	I0923 03:28:44.553684    8259 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:28:44.553695    8259 status.go:174] checking status of multinode-896000 ...
	I0923 03:28:44.554029    8259 status.go:364] multinode-896000 host status = "Stopped" (err=<nil>)
	I0923 03:28:44.554034    8259 status.go:377] host is not running, skipping remaining checks
	I0923 03:28:44.554037    8259 status.go:176] multinode-896000 status: &{Name:multinode-896000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0923 03:28:44.555080    7121 retry.go:31] will retry after 852.075277ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr: exit status 7 (73.809458ms)

                                                
                                                
-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:28:45.481070    8261 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:28:45.481272    8261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:45.481276    8261 out.go:358] Setting ErrFile to fd 2...
	I0923 03:28:45.481280    8261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:45.481468    8261 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:28:45.481615    8261 out.go:352] Setting JSON to false
	I0923 03:28:45.481631    8261 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:28:45.481688    8261 notify.go:220] Checking for updates...
	I0923 03:28:45.481917    8261 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:28:45.481930    8261 status.go:174] checking status of multinode-896000 ...
	I0923 03:28:45.482237    8261 status.go:364] multinode-896000 host status = "Stopped" (err=<nil>)
	I0923 03:28:45.482243    8261 status.go:377] host is not running, skipping remaining checks
	I0923 03:28:45.482245    8261 status.go:176] multinode-896000 status: &{Name:multinode-896000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0923 03:28:45.483383    7121 retry.go:31] will retry after 2.650029078s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr: exit status 7 (76.842375ms)

                                                
                                                
-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:28:48.210438    8263 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:28:48.210642    8263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:48.210646    8263 out.go:358] Setting ErrFile to fd 2...
	I0923 03:28:48.210650    8263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:48.210815    8263 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:28:48.210973    8263 out.go:352] Setting JSON to false
	I0923 03:28:48.210985    8263 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:28:48.211026    8263 notify.go:220] Checking for updates...
	I0923 03:28:48.211269    8263 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:28:48.211284    8263 status.go:174] checking status of multinode-896000 ...
	I0923 03:28:48.211604    8263 status.go:364] multinode-896000 host status = "Stopped" (err=<nil>)
	I0923 03:28:48.211609    8263 status.go:377] host is not running, skipping remaining checks
	I0923 03:28:48.211611    8263 status.go:176] multinode-896000 status: &{Name:multinode-896000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 03:28:48.212704    7121 retry.go:31] will retry after 2.14644709s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr: exit status 7 (76.146833ms)

-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:28:50.435137    8265 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:28:50.435342    8265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:50.435347    8265 out.go:358] Setting ErrFile to fd 2...
	I0923 03:28:50.435350    8265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:50.435544    8265 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:28:50.435714    8265 out.go:352] Setting JSON to false
	I0923 03:28:50.435732    8265 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:28:50.435778    8265 notify.go:220] Checking for updates...
	I0923 03:28:50.436005    8265 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:28:50.436017    8265 status.go:174] checking status of multinode-896000 ...
	I0923 03:28:50.436337    8265 status.go:364] multinode-896000 host status = "Stopped" (err=<nil>)
	I0923 03:28:50.436343    8265 status.go:377] host is not running, skipping remaining checks
	I0923 03:28:50.436346    8265 status.go:176] multinode-896000 status: &{Name:multinode-896000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 03:28:50.437676    7121 retry.go:31] will retry after 6.612694507s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr: exit status 7 (73.599333ms)

-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:28:57.123993    8269 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:28:57.124200    8269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:57.124204    8269 out.go:358] Setting ErrFile to fd 2...
	I0923 03:28:57.124207    8269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:28:57.124402    8269 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:28:57.124561    8269 out.go:352] Setting JSON to false
	I0923 03:28:57.124574    8269 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:28:57.124613    8269 notify.go:220] Checking for updates...
	I0923 03:28:57.124828    8269 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:28:57.124838    8269 status.go:174] checking status of multinode-896000 ...
	I0923 03:28:57.125155    8269 status.go:364] multinode-896000 host status = "Stopped" (err=<nil>)
	I0923 03:28:57.125160    8269 status.go:377] host is not running, skipping remaining checks
	I0923 03:28:57.125163    8269 status.go:176] multinode-896000 status: &{Name:multinode-896000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 03:28:57.126343    7121 retry.go:31] will retry after 8.095217624s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr: exit status 7 (72.764583ms)

-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:29:05.293087    8271 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:29:05.293296    8271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:05.293301    8271 out.go:358] Setting ErrFile to fd 2...
	I0923 03:29:05.293304    8271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:05.293477    8271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:29:05.293623    8271 out.go:352] Setting JSON to false
	I0923 03:29:05.293637    8271 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:29:05.293678    8271 notify.go:220] Checking for updates...
	I0923 03:29:05.293916    8271 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:29:05.293929    8271 status.go:174] checking status of multinode-896000 ...
	I0923 03:29:05.294240    8271 status.go:364] multinode-896000 host status = "Stopped" (err=<nil>)
	I0923 03:29:05.294245    8271 status.go:377] host is not running, skipping remaining checks
	I0923 03:29:05.294248    8271 status.go:176] multinode-896000 status: &{Name:multinode-896000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 03:29:05.295310    7121 retry.go:31] will retry after 16.708071156s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr: exit status 7 (74.983875ms)

-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:29:22.078201    8279 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:29:22.078416    8279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:22.078420    8279 out.go:358] Setting ErrFile to fd 2...
	I0923 03:29:22.078424    8279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:22.078624    8279 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:29:22.078807    8279 out.go:352] Setting JSON to false
	I0923 03:29:22.078825    8279 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:29:22.078865    8279 notify.go:220] Checking for updates...
	I0923 03:29:22.079119    8279 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:29:22.079133    8279 status.go:174] checking status of multinode-896000 ...
	I0923 03:29:22.079446    8279 status.go:364] multinode-896000 host status = "Stopped" (err=<nil>)
	I0923 03:29:22.079451    8279 status.go:377] host is not running, skipping remaining checks
	I0923 03:29:22.079454    8279 status.go:176] multinode-896000 status: &{Name:multinode-896000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 03:29:22.080598    7121 retry.go:31] will retry after 16.414218027s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr: exit status 7 (74.608875ms)

-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:29:38.569424    8285 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:29:38.569644    8285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:38.569648    8285 out.go:358] Setting ErrFile to fd 2...
	I0923 03:29:38.569651    8285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:38.569824    8285 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:29:38.569994    8285 out.go:352] Setting JSON to false
	I0923 03:29:38.570008    8285 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:29:38.570060    8285 notify.go:220] Checking for updates...
	I0923 03:29:38.570284    8285 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:29:38.570302    8285 status.go:174] checking status of multinode-896000 ...
	I0923 03:29:38.570606    8285 status.go:364] multinode-896000 host status = "Stopped" (err=<nil>)
	I0923 03:29:38.570611    8285 status.go:377] host is not running, skipping remaining checks
	I0923 03:29:38.570614    8285 status.go:176] multinode-896000 status: &{Name:multinode-896000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-896000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (33.521042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (54.86s)
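
Note on the retry cadence above: the waits logged by retry.go (roughly 2.6s, 2.1s, 6.6s, 8.1s, 16.7s, 16.4s) follow a jittered, roughly doubling backoff until the test's wait budget runs out and multinode_test.go:294 fails the run. A minimal Go sketch of that pattern, purely illustrative; the helper name, step cap, and budget below are assumptions, not minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo re-runs fn with a randomized, roughly doubling delay between
// attempts until it succeeds or the wait budget is exhausted, mirroring the
// "will retry after N: exit status 7" lines in the log above.
func retryExpo(fn func() error, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	wait := 2 * time.Second
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		// +/-50% jitter keeps repeated runs from synchronizing.
		d := time.Duration(float64(wait) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
		if wait < 16*time.Second { // cap the base step (assumed)
			wait *= 2
		}
	}
}

func main() {
	// Stand-in for "minikube status" against a stopped host (exit status 7).
	probe := func() error { return errors.New("exit status 7") }
	if err := retryExpo(probe, 45*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}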

TestMultiNode/serial/RestartKeepsNodes (8.43s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-896000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-896000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-896000: (3.078646583s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-896000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-896000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.221750291s)

-- stdout --
	* [multinode-896000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-896000" primary control-plane node in "multinode-896000" cluster
	* Restarting existing qemu2 VM for "multinode-896000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-896000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:29:41.778789    8309 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:29:41.778977    8309 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:41.778982    8309 out.go:358] Setting ErrFile to fd 2...
	I0923 03:29:41.778984    8309 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:41.779151    8309 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:29:41.780417    8309 out.go:352] Setting JSON to false
	I0923 03:29:41.799502    8309 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5352,"bootTime":1727082029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:29:41.799572    8309 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:29:41.804397    8309 out.go:177] * [multinode-896000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:29:41.812417    8309 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:29:41.812465    8309 notify.go:220] Checking for updates...
	I0923 03:29:41.818351    8309 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:29:41.821363    8309 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:29:41.824319    8309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:29:41.827361    8309 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:29:41.830381    8309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:29:41.833645    8309 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:29:41.833700    8309 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:29:41.838313    8309 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:29:41.845317    8309 start.go:297] selected driver: qemu2
	I0923 03:29:41.845323    8309 start.go:901] validating driver "qemu2" against &{Name:multinode-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:29:41.845391    8309 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:29:41.847969    8309 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:29:41.847995    8309 cni.go:84] Creating CNI manager for ""
	I0923 03:29:41.848024    8309 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 03:29:41.848072    8309 start.go:340] cluster config:
	{Name:multinode-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:29:41.852005    8309 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:29:41.859207    8309 out.go:177] * Starting "multinode-896000" primary control-plane node in "multinode-896000" cluster
	I0923 03:29:41.863304    8309 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:29:41.863318    8309 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:29:41.863326    8309 cache.go:56] Caching tarball of preloaded images
	I0923 03:29:41.863388    8309 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:29:41.863394    8309 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:29:41.863447    8309 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/multinode-896000/config.json ...
	I0923 03:29:41.863929    8309 start.go:360] acquireMachinesLock for multinode-896000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:29:41.863969    8309 start.go:364] duration metric: took 31.208µs to acquireMachinesLock for "multinode-896000"
	I0923 03:29:41.863980    8309 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:29:41.863984    8309 fix.go:54] fixHost starting: 
	I0923 03:29:41.864120    8309 fix.go:112] recreateIfNeeded on multinode-896000: state=Stopped err=<nil>
	W0923 03:29:41.864130    8309 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:29:41.871377    8309 out.go:177] * Restarting existing qemu2 VM for "multinode-896000" ...
	I0923 03:29:41.875347    8309 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:29:41.875394    8309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:a3:ab:fd:fb:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2
	I0923 03:29:41.877628    8309 main.go:141] libmachine: STDOUT: 
	I0923 03:29:41.877663    8309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:29:41.877694    8309 fix.go:56] duration metric: took 13.707292ms for fixHost
	I0923 03:29:41.877700    8309 start.go:83] releasing machines lock for "multinode-896000", held for 13.726041ms
	W0923 03:29:41.877708    8309 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:29:41.877742    8309 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:29:41.877748    8309 start.go:729] Will try again in 5 seconds ...
	I0923 03:29:46.879740    8309 start.go:360] acquireMachinesLock for multinode-896000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:29:46.880162    8309 start.go:364] duration metric: took 348.792µs to acquireMachinesLock for "multinode-896000"
	I0923 03:29:46.880286    8309 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:29:46.880304    8309 fix.go:54] fixHost starting: 
	I0923 03:29:46.881015    8309 fix.go:112] recreateIfNeeded on multinode-896000: state=Stopped err=<nil>
	W0923 03:29:46.881042    8309 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:29:46.889399    8309 out.go:177] * Restarting existing qemu2 VM for "multinode-896000" ...
	I0923 03:29:46.892421    8309 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:29:46.892583    8309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:a3:ab:fd:fb:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2
	I0923 03:29:46.901446    8309 main.go:141] libmachine: STDOUT: 
	I0923 03:29:46.901547    8309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:29:46.901629    8309 fix.go:56] duration metric: took 21.325416ms for fixHost
	I0923 03:29:46.901655    8309 start.go:83] releasing machines lock for "multinode-896000", held for 21.46875ms
	W0923 03:29:46.901873    8309 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-896000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-896000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:29:46.908480    8309 out.go:201] 
	W0923 03:29:46.912465    8309 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:29:46.912495    8309 out.go:270] * 
	* 
	W0923 03:29:46.915088    8309 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:29:46.923482    8309 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-896000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-896000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (33.215709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.43s)
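
Every restart in this block dies at the same step: libmachine launches qemu through socket_vmnet_client, and the wrapper cannot reach the socket_vmnet daemon's unix socket. The failure is reproducible outside minikube by dialing the socket directly; this probe is a diagnostic sketch (not part of the test suite) using the socket path from the cluster config above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the log
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With the daemon down this prints "connection refused", matching
		// the ERROR lines emitted by socket_vmnet_client in the log.
		fmt.Printf("socket_vmnet unreachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Printf("socket_vmnet accepting connections at %s\n", sock)
}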

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 node delete m03: exit status 83 (41.085625ms)

-- stdout --
	* The control-plane node multinode-896000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-896000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-896000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status --alsologtostderr: exit status 7 (30.465ms)

-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:29:47.109167    8323 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:29:47.109294    8323 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:47.109298    8323 out.go:358] Setting ErrFile to fd 2...
	I0923 03:29:47.109300    8323 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:47.109435    8323 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:29:47.109580    8323 out.go:352] Setting JSON to false
	I0923 03:29:47.109590    8323 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:29:47.109654    8323 notify.go:220] Checking for updates...
	I0923 03:29:47.109790    8323 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:29:47.109797    8323 status.go:174] checking status of multinode-896000 ...
	I0923 03:29:47.110022    8323 status.go:364] multinode-896000 host status = "Stopped" (err=<nil>)
	I0923 03:29:47.110026    8323 status.go:377] host is not running, skipping remaining checks
	I0923 03:29:47.110028    8323 status.go:176] multinode-896000 status: &{Name:multinode-896000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-896000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (31.291292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
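
Three distinct exit codes recur in this report and are worth telling apart when reading it: "status" exits 7 when the host (and hence kubelet and apiserver) is stopped, which helpers_test.go explicitly treats as "may be ok"; "node delete" exits 83 because it needs a running control plane; and "start" exits 80 (GUEST_PROVISION) when the VM cannot be brought up. A sketch of how a harness can branch on the status code, assuming only what the log shows (the binary path and profile name are taken from the commands above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "multinode-896000")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host running: %s", out)
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// The harness logs "status error: exit status 7 (may be ok)" and
		// skips log retrieval instead of failing hard.
		fmt.Printf("host stopped (exit 7, may be ok): %s", out)
	default:
		fmt.Printf("status failed: %v\n%s", err, out)
	}
}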

TestMultiNode/serial/StopMultiNode (3.69s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-896000 stop: (3.559867333s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status: exit status 7 (64.383709ms)

-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-896000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-896000 status --alsologtostderr: exit status 7 (32.916125ms)

-- stdout --
	multinode-896000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 03:29:50.798322    8347 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:29:50.798486    8347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:50.798489    8347 out.go:358] Setting ErrFile to fd 2...
	I0923 03:29:50.798491    8347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:50.798632    8347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:29:50.798771    8347 out.go:352] Setting JSON to false
	I0923 03:29:50.798782    8347 mustload.go:65] Loading cluster: multinode-896000
	I0923 03:29:50.798848    8347 notify.go:220] Checking for updates...
	I0923 03:29:50.798983    8347 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:29:50.798990    8347 status.go:174] checking status of multinode-896000 ...
	I0923 03:29:50.799235    8347 status.go:364] multinode-896000 host status = "Stopped" (err=<nil>)
	I0923 03:29:50.799239    8347 status.go:377] host is not running, skipping remaining checks
	I0923 03:29:50.799241    8347 status.go:176] multinode-896000 status: &{Name:multinode-896000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-896000 status --alsologtostderr": multinode-896000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-896000 status --alsologtostderr": multinode-896000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (30.348584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.69s)
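
The StopMultiNode assertions count stanzas rather than parse them: with the second node never having joined, "status" prints a single "host: Stopped" block where the test expects one per node. A rough equivalent of that check, as a sketch (the expected count of 2 is an assumption for a two-node cluster; the real assertions live at multinode_test.go:364 and :368):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The status output captured above: one stanza, not two.
	statusOut := `multinode-896000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	const wantStopped = 2 // control plane + one worker (assumed)
	if got := strings.Count(statusOut, "host: Stopped"); got != wantStopped {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n",
			got, wantStopped)
	}
	if got := strings.Count(statusOut, "kubelet: Stopped"); got != wantStopped {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n",
			got, wantStopped)
	}
}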

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-896000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-896000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.177905375s)

-- stdout --
	* [multinode-896000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-896000" primary control-plane node in "multinode-896000" cluster
	* Restarting existing qemu2 VM for "multinode-896000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-896000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:29:50.859622    8351 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:29:50.859773    8351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:50.859776    8351 out.go:358] Setting ErrFile to fd 2...
	I0923 03:29:50.859779    8351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:29:50.859908    8351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:29:50.860947    8351 out.go:352] Setting JSON to false
	I0923 03:29:50.876973    8351 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5361,"bootTime":1727082029,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:29:50.877035    8351 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:29:50.882083    8351 out.go:177] * [multinode-896000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:29:50.889877    8351 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:29:50.889925    8351 notify.go:220] Checking for updates...
	I0923 03:29:50.897019    8351 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:29:50.898442    8351 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:29:50.901983    8351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:29:50.904976    8351 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:29:50.906252    8351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:29:50.909246    8351 config.go:182] Loaded profile config "multinode-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:29:50.909520    8351 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:29:50.913945    8351 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:29:50.919029    8351 start.go:297] selected driver: qemu2
	I0923 03:29:50.919035    8351 start.go:901] validating driver "qemu2" against &{Name:multinode-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:29:50.919088    8351 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:29:50.921414    8351 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:29:50.921438    8351 cni.go:84] Creating CNI manager for ""
	I0923 03:29:50.921458    8351 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 03:29:50.921510    8351 start.go:340] cluster config:
	{Name:multinode-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:29:50.924993    8351 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:29:50.932980    8351 out.go:177] * Starting "multinode-896000" primary control-plane node in "multinode-896000" cluster
	I0923 03:29:50.936980    8351 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:29:50.937000    8351 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:29:50.937005    8351 cache.go:56] Caching tarball of preloaded images
	I0923 03:29:50.937066    8351 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:29:50.937073    8351 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:29:50.937135    8351 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/multinode-896000/config.json ...
	I0923 03:29:50.937569    8351 start.go:360] acquireMachinesLock for multinode-896000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:29:50.937597    8351 start.go:364] duration metric: took 22.083µs to acquireMachinesLock for "multinode-896000"
	I0923 03:29:50.937607    8351 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:29:50.937612    8351 fix.go:54] fixHost starting: 
	I0923 03:29:50.937724    8351 fix.go:112] recreateIfNeeded on multinode-896000: state=Stopped err=<nil>
	W0923 03:29:50.937734    8351 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:29:50.944969    8351 out.go:177] * Restarting existing qemu2 VM for "multinode-896000" ...
	I0923 03:29:50.949002    8351 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:29:50.949050    8351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:a3:ab:fd:fb:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2
	I0923 03:29:50.951028    8351 main.go:141] libmachine: STDOUT: 
	I0923 03:29:50.951045    8351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:29:50.951073    8351 fix.go:56] duration metric: took 13.45975ms for fixHost
	I0923 03:29:50.951077    8351 start.go:83] releasing machines lock for "multinode-896000", held for 13.475625ms
	W0923 03:29:50.951090    8351 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:29:50.951122    8351 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:29:50.951126    8351 start.go:729] Will try again in 5 seconds ...
	I0923 03:29:55.953252    8351 start.go:360] acquireMachinesLock for multinode-896000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:29:55.953720    8351 start.go:364] duration metric: took 385.834µs to acquireMachinesLock for "multinode-896000"
	I0923 03:29:55.953862    8351 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:29:55.953880    8351 fix.go:54] fixHost starting: 
	I0923 03:29:55.954591    8351 fix.go:112] recreateIfNeeded on multinode-896000: state=Stopped err=<nil>
	W0923 03:29:55.954622    8351 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:29:55.958111    8351 out.go:177] * Restarting existing qemu2 VM for "multinode-896000" ...
	I0923 03:29:55.965072    8351 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:29:55.965393    8351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:a3:ab:fd:fb:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/multinode-896000/disk.qcow2
	I0923 03:29:55.974278    8351 main.go:141] libmachine: STDOUT: 
	I0923 03:29:55.974357    8351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:29:55.974431    8351 fix.go:56] duration metric: took 20.546333ms for fixHost
	I0923 03:29:55.974446    8351 start.go:83] releasing machines lock for "multinode-896000", held for 20.703542ms
	W0923 03:29:55.974647    8351 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-896000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-896000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:29:55.981109    8351 out.go:201] 
	W0923 03:29:55.985258    8351 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:29:55.985335    8351 out.go:270] * 
	* 
	W0923 03:29:55.987927    8351 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:29:55.995055    8351 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-896000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (66.850209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
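Every failure in this report traces to the same root cause: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), which means the host-side socket_vmnet daemon is down or was never started on this agent. A minimal diagnosis sketch (the paths come from the log above; the daemon invocation and gateway address are assumptions taken from socket_vmnet's documented usage, not from this report):

	ls -l /var/run/socket_vmnet      # does the listening socket exist?
	pgrep -fl socket_vmnet           # is the daemon process alive?
	# (Re)start the daemon if needed; these flags are socket_vmnet's documented
	# defaults and may differ on this CI host:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &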

TestMultiNode/serial/ValidateNameConflict (20.35s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-896000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-896000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-896000-m01 --driver=qemu2 : exit status 80 (10.052867875s)

-- stdout --
	* [multinode-896000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-896000-m01" primary control-plane node in "multinode-896000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-896000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-896000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-896000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-896000-m02 --driver=qemu2 : exit status 80 (10.073909209s)

-- stdout --
	* [multinode-896000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-896000-m02" primary control-plane node in "multinode-896000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-896000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-896000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-896000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-896000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-896000: exit status 83 (77.350375ms)

-- stdout --
	* The control-plane node multinode-896000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-896000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-896000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-896000 -n multinode-896000: exit status 7 (30.050625ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-896000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.35s)
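Note that the name-conflict handling this subtest exists to validate (starting profiles named multinode-896000-m01 and multinode-896000-m02, which collide with the node-naming scheme of the multinode-896000 cluster) is never actually exercised: both starts die on the same socket_vmnet error before any conflict check runs, and the later "node add" exits 83 only because the control-plane host is stopped. A re-run sketch for after the daemon is fixed; the harness invocation follows minikube's contributor docs and is an assumption here:

	make integration -e TEST_ARGS="-minikube-start-args=--driver=qemu2 -test.run TestMultiNode/serial/ValidateNameConflict"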

TestPreload (10.18s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-252000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-252000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.024916042s)

-- stdout --
	* [test-preload-252000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-252000" primary control-plane node in "test-preload-252000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-252000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:30:16.561371    8709 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:30:16.561512    8709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:30:16.561515    8709 out.go:358] Setting ErrFile to fd 2...
	I0923 03:30:16.561517    8709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:30:16.561645    8709 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:30:16.562691    8709 out.go:352] Setting JSON to false
	I0923 03:30:16.578788    8709 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5387,"bootTime":1727082029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:30:16.578859    8709 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:30:16.584261    8709 out.go:177] * [test-preload-252000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:30:16.592208    8709 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:30:16.592249    8709 notify.go:220] Checking for updates...
	I0923 03:30:16.599191    8709 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:30:16.602141    8709 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:30:16.606206    8709 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:30:16.609215    8709 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:30:16.612239    8709 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:30:16.615497    8709 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:30:16.615556    8709 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:30:16.619206    8709 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:30:16.626130    8709 start.go:297] selected driver: qemu2
	I0923 03:30:16.626135    8709 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:30:16.626142    8709 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:30:16.628630    8709 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:30:16.631255    8709 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:30:16.634181    8709 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:30:16.634200    8709 cni.go:84] Creating CNI manager for ""
	I0923 03:30:16.634221    8709 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:30:16.634226    8709 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:30:16.634250    8709 start.go:340] cluster config:
	{Name:test-preload-252000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:30:16.638094    8709 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:30:16.645118    8709 out.go:177] * Starting "test-preload-252000" primary control-plane node in "test-preload-252000" cluster
	I0923 03:30:16.649158    8709 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0923 03:30:16.649237    8709 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/test-preload-252000/config.json ...
	I0923 03:30:16.649257    8709 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/test-preload-252000/config.json: {Name:mk121b439c32f507a800ef80275485bf1d752961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:30:16.649271    8709 cache.go:107] acquiring lock: {Name:mk9b40db5f4a4860de51bf9554609818322b049e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:30:16.649268    8709 cache.go:107] acquiring lock: {Name:mk56587b4c1fcef2aab5d4e7c78145853fa5c5f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:30:16.649307    8709 cache.go:107] acquiring lock: {Name:mkce9d2e2039b7acd8a294fb304060584009ccf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:30:16.649477    8709 cache.go:107] acquiring lock: {Name:mkea33b50fa8a41799f541017afe17f76b34e77a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:30:16.649531    8709 cache.go:107] acquiring lock: {Name:mk768d5947d0f55ea064f8290e5090197910d143 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:30:16.649536    8709 cache.go:107] acquiring lock: {Name:mkb5d5c4cd67393cf98fcd354cdf0972981c2358 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:30:16.649561    8709 cache.go:107] acquiring lock: {Name:mk1fc623449e8d8166cbcef622e6dd7450f12a81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:30:16.649644    8709 start.go:360] acquireMachinesLock for test-preload-252000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:30:16.649673    8709 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 03:30:16.649786    8709 start.go:364] duration metric: took 131.042µs to acquireMachinesLock for "test-preload-252000"
	I0923 03:30:16.649796    8709 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0923 03:30:16.649824    8709 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0923 03:30:16.649800    8709 start.go:93] Provisioning new machine with config: &{Name:test-preload-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:30:16.649841    8709 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:30:16.649852    8709 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:30:16.649856    8709 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0923 03:30:16.649924    8709 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0923 03:30:16.649556    8709 cache.go:107] acquiring lock: {Name:mk7af1fc84aa415b086225abcbdbaf517a4fb1b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:30:16.650415    8709 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0923 03:30:16.650418    8709 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:30:16.654155    8709 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:30:16.660858    8709 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0923 03:30:16.661767    8709 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0923 03:30:16.661862    8709 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0923 03:30:16.661924    8709 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:30:16.662051    8709 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 03:30:16.662799    8709 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0923 03:30:16.662969    8709 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:30:16.663110    8709 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0923 03:30:16.671510    8709 start.go:159] libmachine.API.Create for "test-preload-252000" (driver="qemu2")
	I0923 03:30:16.671526    8709 client.go:168] LocalClient.Create starting
	I0923 03:30:16.671597    8709 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:30:16.671625    8709 main.go:141] libmachine: Decoding PEM data...
	I0923 03:30:16.671633    8709 main.go:141] libmachine: Parsing certificate...
	I0923 03:30:16.671682    8709 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:30:16.671705    8709 main.go:141] libmachine: Decoding PEM data...
	I0923 03:30:16.671713    8709 main.go:141] libmachine: Parsing certificate...
	I0923 03:30:16.672105    8709 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:30:16.841934    8709 main.go:141] libmachine: Creating SSH key...
	I0923 03:30:17.003116    8709 main.go:141] libmachine: Creating Disk image...
	I0923 03:30:17.003129    8709 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:30:17.003333    8709 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/disk.qcow2
	I0923 03:30:17.012750    8709 main.go:141] libmachine: STDOUT: 
	I0923 03:30:17.012776    8709 main.go:141] libmachine: STDERR: 
	I0923 03:30:17.012845    8709 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/disk.qcow2 +20000M
	I0923 03:30:17.020779    8709 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:30:17.020798    8709 main.go:141] libmachine: STDERR: 
	I0923 03:30:17.020809    8709 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/disk.qcow2
	I0923 03:30:17.020814    8709 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:30:17.020825    8709 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:30:17.020857    8709 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:ae:c7:cd:cd:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/disk.qcow2
	I0923 03:30:17.022601    8709 main.go:141] libmachine: STDOUT: 
	I0923 03:30:17.022615    8709 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:30:17.022632    8709 client.go:171] duration metric: took 351.11ms to LocalClient.Create
	W0923 03:30:17.213655    8709 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0923 03:30:17.213702    8709 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0923 03:30:17.219812    8709 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0923 03:30:17.231865    8709 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0923 03:30:17.236066    8709 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0923 03:30:17.240646    8709 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0923 03:30:17.255073    8709 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0923 03:30:17.277479    8709 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0923 03:30:17.361006    8709 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0923 03:30:17.361056    8709 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 711.6505ms
	I0923 03:30:17.361096    8709 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0923 03:30:17.760159    8709 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0923 03:30:17.760258    8709 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0923 03:30:18.680138    8709 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0923 03:30:18.680181    8709 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.030952958s
	I0923 03:30:18.680225    8709 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0923 03:30:19.022862    8709 start.go:128] duration metric: took 2.373039458s to createHost
	I0923 03:30:19.022954    8709 start.go:83] releasing machines lock for "test-preload-252000", held for 2.373177583s
	W0923 03:30:19.023017    8709 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:30:19.040170    8709 out.go:177] * Deleting "test-preload-252000" in qemu2 ...
	W0923 03:30:19.073242    8709 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:30:19.073265    8709 start.go:729] Will try again in 5 seconds ...
	I0923 03:30:19.459753    8709 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0923 03:30:19.459800    8709 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.810295541s
	I0923 03:30:19.459826    8709 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0923 03:30:20.348068    8709 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0923 03:30:20.348111    8709 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.698724542s
	I0923 03:30:20.348133    8709 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0923 03:30:21.187038    8709 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0923 03:30:21.187090    8709 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.537936084s
	I0923 03:30:21.187146    8709 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0923 03:30:22.066268    8709 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0923 03:30:22.066314    8709 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.417157667s
	I0923 03:30:22.066341    8709 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0923 03:30:24.049166    8709 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0923 03:30:24.049210    8709 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.3998275s
	I0923 03:30:24.049237    8709 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0923 03:30:24.073450    8709 start.go:360] acquireMachinesLock for test-preload-252000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:30:24.073807    8709 start.go:364] duration metric: took 296.75µs to acquireMachinesLock for "test-preload-252000"
	I0923 03:30:24.073890    8709 start.go:93] Provisioning new machine with config: &{Name:test-preload-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:30:24.074100    8709 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:30:24.091981    8709 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:30:24.143398    8709 start.go:159] libmachine.API.Create for "test-preload-252000" (driver="qemu2")
	I0923 03:30:24.143437    8709 client.go:168] LocalClient.Create starting
	I0923 03:30:24.143558    8709 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:30:24.143620    8709 main.go:141] libmachine: Decoding PEM data...
	I0923 03:30:24.143643    8709 main.go:141] libmachine: Parsing certificate...
	I0923 03:30:24.143714    8709 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:30:24.143758    8709 main.go:141] libmachine: Decoding PEM data...
	I0923 03:30:24.143770    8709 main.go:141] libmachine: Parsing certificate...
	I0923 03:30:24.144266    8709 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:30:24.317725    8709 main.go:141] libmachine: Creating SSH key...
	I0923 03:30:24.490450    8709 main.go:141] libmachine: Creating Disk image...
	I0923 03:30:24.490457    8709 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:30:24.490650    8709 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/disk.qcow2
	I0923 03:30:24.500472    8709 main.go:141] libmachine: STDOUT: 
	I0923 03:30:24.500506    8709 main.go:141] libmachine: STDERR: 
	I0923 03:30:24.500613    8709 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/disk.qcow2 +20000M
	I0923 03:30:24.508866    8709 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:30:24.508883    8709 main.go:141] libmachine: STDERR: 
	I0923 03:30:24.508904    8709 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/disk.qcow2
	I0923 03:30:24.508909    8709 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:30:24.508919    8709 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:30:24.508949    8709 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:0a:94:4b:26:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/test-preload-252000/disk.qcow2
	I0923 03:30:24.510773    8709 main.go:141] libmachine: STDOUT: 
	I0923 03:30:24.510788    8709 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:30:24.510801    8709 client.go:171] duration metric: took 367.364583ms to LocalClient.Create
	I0923 03:30:26.203535    8709 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0923 03:30:26.203587    8709 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.554277916s
	I0923 03:30:26.203612    8709 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0923 03:30:26.203670    8709 cache.go:87] Successfully saved all images to host disk.
	I0923 03:30:26.511442    8709 start.go:128] duration metric: took 2.437366209s to createHost
	I0923 03:30:26.511511    8709 start.go:83] releasing machines lock for "test-preload-252000", held for 2.437736667s
	W0923 03:30:26.511827    8709 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-252000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-252000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:30:26.527364    8709 out.go:201] 
	W0923 03:30:26.532458    8709 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:30:26.532484    8709 out.go:270] * 
	* 
	W0923 03:30:26.534997    8709 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:30:26.543378    8709 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-252000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-23 03:30:26.561469 -0700 PDT m=+681.383177543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-252000 -n test-preload-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-252000 -n test-preload-252000: exit status 7 (67.584917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-252000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-252000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-252000
--- FAIL: TestPreload (10.18s)
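Two details in the trace above are worth separating from the failure. First, disk provisioning succeeds: libmachine converts the raw boot image to qcow2 and grows it, and only the subsequent VM launch fails. The equivalent commands, shortened from the "executing:" lines in the trace (a sketch only, with paths abbreviated):

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M

Second, because the test passes --preload=false, minikube caches each image as a per-architecture tarball rather than using a preload bundle. All eight images are saved successfully (including coredns and storage-provisioner, which are re-pulled after the "arch mismatch: want arm64 got amd64" warnings) under .minikube/cache/images/arm64/, so the FAIL reflects only the VM start, not the preload code path under test.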

TestScheduledStopUnix (10.03s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-152000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-152000 --memory=2048 --driver=qemu2 : exit status 80 (9.884389208s)

-- stdout --
	* [scheduled-stop-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-152000" primary control-plane node in "scheduled-stop-152000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-152000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-152000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-152000" primary control-plane node in "scheduled-stop-152000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-152000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-152000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-23 03:30:36.594024 -0700 PDT m=+691.415951293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-152000 -n scheduled-stop-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-152000 -n scheduled-stop-152000: exit status 7 (68.502333ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-152000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-152000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-152000
--- FAIL: TestScheduledStopUnix (10.03s)
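As with TestPreload, the behavior under test is never reached. Had the cluster started, scheduled_stop_test.go would go on to arm and cancel a delayed stop, roughly as below; --schedule and --cancel-scheduled are real minikube stop flags, but the exact sequence in the test file is an assumption:

	out/minikube-darwin-arm64 stop -p scheduled-stop-152000 --schedule 5m
	out/minikube-darwin-arm64 stop -p scheduled-stop-152000 --cancel-scheduled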

TestSkaffold (12.34s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1592379487 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1592379487 version: (1.057692375s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-561000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-561000 --memory=2600 --driver=qemu2 : exit status 80 (9.984427875s)

-- stdout --
	* [skaffold-561000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-561000" primary control-plane node in "skaffold-561000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-561000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-561000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-561000" primary control-plane node in "skaffold-561000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-561000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-23 03:30:48.933329 -0700 PDT m=+703.755525959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-561000 -n skaffold-561000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-561000 -n skaffold-561000: exit status 7 (61.875791ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-561000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-561000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-561000
--- FAIL: TestSkaffold (12.34s)
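
Every qemu2 VM creation in this report fails the same way: connecting to "/var/run/socket_vmnet" is refused, meaning no socket_vmnet daemon was accepting connections on the build agent when minikube tried to hand QEMU its network socket. A quick way to confirm that independently of minikube is to attempt the same Unix-socket dial the qemu2 driver makes. The Go sketch below is illustrative only; the socket path is taken from the log above, and the code is ours, not minikube's actual driver code.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the control socket the qemu2 driver needs for socket_vmnet
	// networking. "connection refused" (as seen throughout this report)
	// means the socket file exists but nothing is listening on it.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
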
TestRunningBinaryUpgrade (593.07s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3680202906 start -p running-upgrade-515000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3680202906 start -p running-upgrade-515000 --memory=2200 --vm-driver=qemu2 : (56.821185s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-515000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-515000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.336666625s)
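
The two Run lines above are the whole mechanism of this test: a released minikube v1.26.0 binary creates and starts the profile (56.8s, successful), then the binary under test re-runs start on the same profile and must upgrade the running VM in place; that second step is what exits with status 80 after 8m22s. A rough reconstruction of the two-step invocation, for local reproduction only (binary paths are the ones from this run's log; the helper below is not the test's actual harness):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one minikube invocation and echoes its combined output,
// loosely mirroring the test's dbg runner.
func run(bin string, args ...string) error {
	out, err := exec.Command(bin, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", bin, args, out)
	return err
}

func main() {
	profile := "running-upgrade-515000"
	oldBin := "/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3680202906"
	newBin := "out/minikube-darwin-arm64"

	// Step 1: bring the cluster up with the old release (succeeded above).
	if err := run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=qemu2"); err != nil {
		fmt.Println("old-binary start failed:", err)
		return
	}
	// Step 2: upgrade in place with the binary under test (exit status 80 above).
	if err := run(newBin, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=qemu2"); err != nil {
		fmt.Println("upgrade start failed:", err)
	}
}

The captured stdout and stderr of the failing second start follow.
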
-- stdout --
	* [running-upgrade-515000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-515000" primary control-plane node in "running-upgrade-515000" cluster
	* Updating the running qemu2 "running-upgrade-515000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0923 03:32:28.302194    9103 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:32:28.302325    9103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:32:28.302328    9103 out.go:358] Setting ErrFile to fd 2...
	I0923 03:32:28.302330    9103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:32:28.302450    9103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:32:28.303508    9103 out.go:352] Setting JSON to false
	I0923 03:32:28.319960    9103 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5519,"bootTime":1727082029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:32:28.320041    9103 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:32:28.325311    9103 out.go:177] * [running-upgrade-515000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:32:28.334496    9103 notify.go:220] Checking for updates...
	I0923 03:32:28.339365    9103 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:32:28.343200    9103 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:32:28.347330    9103 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:32:28.350356    9103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:32:28.351637    9103 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:32:28.354359    9103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:32:28.357698    9103 config.go:182] Loaded profile config "running-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:32:28.361316    9103 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 03:32:28.364331    9103 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:32:28.368364    9103 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:32:28.375359    9103 start.go:297] selected driver: qemu2
	I0923 03:32:28.375365    9103 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51268 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 03:32:28.375427    9103 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:32:28.377850    9103 cni.go:84] Creating CNI manager for ""
	I0923 03:32:28.377879    9103 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:32:28.377909    9103 start.go:340] cluster config:
	{Name:running-upgrade-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51268 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 03:32:28.377961    9103 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:32:28.386241    9103 out.go:177] * Starting "running-upgrade-515000" primary control-plane node in "running-upgrade-515000" cluster
	I0923 03:32:28.390339    9103 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 03:32:28.390357    9103 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0923 03:32:28.390364    9103 cache.go:56] Caching tarball of preloaded images
	I0923 03:32:28.390436    9103 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:32:28.390443    9103 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0923 03:32:28.390505    9103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/config.json ...
	I0923 03:32:28.390872    9103 start.go:360] acquireMachinesLock for running-upgrade-515000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:32:28.390902    9103 start.go:364] duration metric: took 24µs to acquireMachinesLock for "running-upgrade-515000"
	I0923 03:32:28.390911    9103 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:32:28.390916    9103 fix.go:54] fixHost starting: 
	I0923 03:32:28.391575    9103 fix.go:112] recreateIfNeeded on running-upgrade-515000: state=Running err=<nil>
	W0923 03:32:28.391583    9103 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:32:28.395306    9103 out.go:177] * Updating the running qemu2 "running-upgrade-515000" VM ...
	I0923 03:32:28.403327    9103 machine.go:93] provisionDockerMachine start ...
	I0923 03:32:28.403376    9103 main.go:141] libmachine: Using SSH client type: native
	I0923 03:32:28.403496    9103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10462dc00] 0x104630440 <nil>  [] 0s} localhost 51236 <nil> <nil>}
	I0923 03:32:28.403502    9103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 03:32:28.473003    9103 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-515000
	
	I0923 03:32:28.473024    9103 buildroot.go:166] provisioning hostname "running-upgrade-515000"
	I0923 03:32:28.473107    9103 main.go:141] libmachine: Using SSH client type: native
	I0923 03:32:28.473263    9103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10462dc00] 0x104630440 <nil>  [] 0s} localhost 51236 <nil> <nil>}
	I0923 03:32:28.473271    9103 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-515000 && echo "running-upgrade-515000" | sudo tee /etc/hostname
	I0923 03:32:28.543755    9103 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-515000
	
	I0923 03:32:28.543805    9103 main.go:141] libmachine: Using SSH client type: native
	I0923 03:32:28.543921    9103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10462dc00] 0x104630440 <nil>  [] 0s} localhost 51236 <nil> <nil>}
	I0923 03:32:28.543930    9103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-515000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-515000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-515000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 03:32:28.611195    9103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 03:32:28.611207    9103 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19689-6600/.minikube CaCertPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19689-6600/.minikube}
	I0923 03:32:28.611215    9103 buildroot.go:174] setting up certificates
	I0923 03:32:28.611219    9103 provision.go:84] configureAuth start
	I0923 03:32:28.611224    9103 provision.go:143] copyHostCerts
	I0923 03:32:28.611300    9103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.pem, removing ...
	I0923 03:32:28.611305    9103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.pem
	I0923 03:32:28.611427    9103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.pem (1078 bytes)
	I0923 03:32:28.611593    9103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19689-6600/.minikube/cert.pem, removing ...
	I0923 03:32:28.611599    9103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19689-6600/.minikube/cert.pem
	I0923 03:32:28.611653    9103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19689-6600/.minikube/cert.pem (1123 bytes)
	I0923 03:32:28.611775    9103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19689-6600/.minikube/key.pem, removing ...
	I0923 03:32:28.611778    9103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19689-6600/.minikube/key.pem
	I0923 03:32:28.611818    9103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19689-6600/.minikube/key.pem (1675 bytes)
	I0923 03:32:28.611925    9103 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-515000 san=[127.0.0.1 localhost minikube running-upgrade-515000]
	I0923 03:32:28.665799    9103 provision.go:177] copyRemoteCerts
	I0923 03:32:28.665835    9103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 03:32:28.665842    9103 sshutil.go:53] new ssh client: &{IP:localhost Port:51236 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/running-upgrade-515000/id_rsa Username:docker}
	I0923 03:32:28.703008    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 03:32:28.709580    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0923 03:32:28.716792    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 03:32:28.723438    9103 provision.go:87] duration metric: took 112.212875ms to configureAuth
	I0923 03:32:28.723446    9103 buildroot.go:189] setting minikube options for container-runtime
	I0923 03:32:28.723561    9103 config.go:182] Loaded profile config "running-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:32:28.723600    9103 main.go:141] libmachine: Using SSH client type: native
	I0923 03:32:28.723684    9103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10462dc00] 0x104630440 <nil>  [] 0s} localhost 51236 <nil> <nil>}
	I0923 03:32:28.723691    9103 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 03:32:28.786497    9103 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 03:32:28.786504    9103 buildroot.go:70] root file system type: tmpfs
	I0923 03:32:28.786555    9103 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 03:32:28.786598    9103 main.go:141] libmachine: Using SSH client type: native
	I0923 03:32:28.786690    9103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10462dc00] 0x104630440 <nil>  [] 0s} localhost 51236 <nil> <nil>}
	I0923 03:32:28.786725    9103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 03:32:28.855744    9103 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 03:32:28.855788    9103 main.go:141] libmachine: Using SSH client type: native
	I0923 03:32:28.855888    9103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10462dc00] 0x104630440 <nil>  [] 0s} localhost 51236 <nil> <nil>}
	I0923 03:32:28.855896    9103 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 03:32:28.921382    9103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 03:32:28.921394    9103 machine.go:96] duration metric: took 518.071208ms to provisionDockerMachine
	I0923 03:32:28.921399    9103 start.go:293] postStartSetup for "running-upgrade-515000" (driver="qemu2")
	I0923 03:32:28.921405    9103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 03:32:28.921471    9103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 03:32:28.921479    9103 sshutil.go:53] new ssh client: &{IP:localhost Port:51236 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/running-upgrade-515000/id_rsa Username:docker}
	I0923 03:32:28.956968    9103 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 03:32:28.958290    9103 info.go:137] Remote host: Buildroot 2021.02.12
	I0923 03:32:28.958301    9103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19689-6600/.minikube/addons for local assets ...
	I0923 03:32:28.958378    9103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19689-6600/.minikube/files for local assets ...
	I0923 03:32:28.958476    9103 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19689-6600/.minikube/files/etc/ssl/certs/71212.pem -> 71212.pem in /etc/ssl/certs
	I0923 03:32:28.958572    9103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 03:32:28.961592    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/files/etc/ssl/certs/71212.pem --> /etc/ssl/certs/71212.pem (1708 bytes)
	I0923 03:32:28.972825    9103 start.go:296] duration metric: took 51.418167ms for postStartSetup
	I0923 03:32:28.972846    9103 fix.go:56] duration metric: took 581.942958ms for fixHost
	I0923 03:32:28.972901    9103 main.go:141] libmachine: Using SSH client type: native
	I0923 03:32:28.973018    9103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10462dc00] 0x104630440 <nil>  [] 0s} localhost 51236 <nil> <nil>}
	I0923 03:32:28.973022    9103 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 03:32:29.039145    9103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727087549.054450264
	
	I0923 03:32:29.039154    9103 fix.go:216] guest clock: 1727087549.054450264
	I0923 03:32:29.039157    9103 fix.go:229] Guest: 2024-09-23 03:32:29.054450264 -0700 PDT Remote: 2024-09-23 03:32:28.972848 -0700 PDT m=+0.690961417 (delta=81.602264ms)
	I0923 03:32:29.039169    9103 fix.go:200] guest clock delta is within tolerance: 81.602264ms
	I0923 03:32:29.039172    9103 start.go:83] releasing machines lock for "running-upgrade-515000", held for 648.279625ms
	I0923 03:32:29.039245    9103 ssh_runner.go:195] Run: cat /version.json
	I0923 03:32:29.039255    9103 sshutil.go:53] new ssh client: &{IP:localhost Port:51236 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/running-upgrade-515000/id_rsa Username:docker}
	I0923 03:32:29.039245    9103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 03:32:29.039289    9103 sshutil.go:53] new ssh client: &{IP:localhost Port:51236 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/running-upgrade-515000/id_rsa Username:docker}
	W0923 03:32:29.039834    9103 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51371->127.0.0.1:51236: write: broken pipe
	I0923 03:32:29.039846    9103 retry.go:31] will retry after 191.680256ms: ssh: handshake failed: write tcp 127.0.0.1:51371->127.0.0.1:51236: write: broken pipe
	W0923 03:32:29.072437    9103 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0923 03:32:29.072492    9103 ssh_runner.go:195] Run: systemctl --version
	I0923 03:32:29.074210    9103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 03:32:29.075737    9103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 03:32:29.075765    9103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0923 03:32:29.078497    9103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0923 03:32:29.082994    9103 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 03:32:29.083001    9103 start.go:495] detecting cgroup driver to use...
	I0923 03:32:29.083125    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 03:32:29.088411    9103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0923 03:32:29.091409    9103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 03:32:29.094473    9103 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 03:32:29.094501    9103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 03:32:29.097342    9103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 03:32:29.100572    9103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 03:32:29.103351    9103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 03:32:29.106476    9103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 03:32:29.109974    9103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 03:32:29.113364    9103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 03:32:29.116254    9103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 03:32:29.119048    9103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 03:32:29.122170    9103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 03:32:29.125393    9103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:32:29.217715    9103 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 03:32:29.229565    9103 start.go:495] detecting cgroup driver to use...
	I0923 03:32:29.229633    9103 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 03:32:29.241487    9103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 03:32:29.246492    9103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 03:32:29.252564    9103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 03:32:29.256807    9103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 03:32:29.261059    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 03:32:29.266266    9103 ssh_runner.go:195] Run: which cri-dockerd
	I0923 03:32:29.306548    9103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 03:32:29.309619    9103 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0923 03:32:29.314899    9103 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 03:32:29.409613    9103 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 03:32:29.504267    9103 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 03:32:29.504328    9103 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 03:32:29.509950    9103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:32:29.588387    9103 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 03:32:32.405258    9103 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.816914042s)
	I0923 03:32:32.405337    9103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 03:32:32.410287    9103 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0923 03:32:32.416949    9103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 03:32:32.421857    9103 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 03:32:32.515508    9103 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 03:32:32.603609    9103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:32:32.683071    9103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 03:32:32.689957    9103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 03:32:32.694943    9103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:32:32.777703    9103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 03:32:32.818344    9103 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 03:32:32.818429    9103 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 03:32:32.820800    9103 start.go:563] Will wait 60s for crictl version
	I0923 03:32:32.820874    9103 ssh_runner.go:195] Run: which crictl
	I0923 03:32:32.822672    9103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 03:32:32.835226    9103 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0923 03:32:32.835309    9103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 03:32:32.848528    9103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 03:32:32.870226    9103 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0923 03:32:32.870378    9103 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0923 03:32:32.871649    9103 kubeadm.go:883] updating cluster {Name:running-upgrade-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51268 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0923 03:32:32.871688    9103 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 03:32:32.871733    9103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 03:32:32.882222    9103 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 03:32:32.882232    9103 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 03:32:32.882284    9103 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 03:32:32.885819    9103 ssh_runner.go:195] Run: which lz4
	I0923 03:32:32.887149    9103 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 03:32:32.888349    9103 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 03:32:32.888358    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0923 03:32:33.826681    9103 docker.go:649] duration metric: took 939.592ms to copy over tarball
	I0923 03:32:33.826747    9103 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 03:32:34.997011    9103 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.170275209s)
	I0923 03:32:34.997024    9103 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 03:32:35.012540    9103 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 03:32:35.015406    9103 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0923 03:32:35.020424    9103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:32:35.098333    9103 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 03:32:36.312038    9103 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.213707667s)
	I0923 03:32:36.312132    9103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 03:32:36.326271    9103 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 03:32:36.326281    9103 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 03:32:36.326286    9103 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0923 03:32:36.330238    9103 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:32:36.331938    9103 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 03:32:36.334270    9103 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:32:36.334491    9103 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:32:36.336417    9103 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 03:32:36.336488    9103 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:32:36.337953    9103 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:32:36.337978    9103 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:32:36.339110    9103 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:32:36.339173    9103 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:32:36.341486    9103 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0923 03:32:36.341561    9103 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:32:36.341610    9103 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0923 03:32:36.342218    9103 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:32:36.343724    9103 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0923 03:32:36.343773    9103 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0923 03:32:36.735293    9103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0923 03:32:36.748943    9103 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0923 03:32:36.748969    9103 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 03:32:36.749037    9103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0923 03:32:36.751870    9103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:32:36.767597    9103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:32:36.768992    9103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0923 03:32:36.769035    9103 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0923 03:32:36.769051    9103 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:32:36.769088    9103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:32:36.780265    9103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:32:36.790172    9103 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0923 03:32:36.790192    9103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0923 03:32:36.790195    9103 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:32:36.790257    9103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:32:36.792647    9103 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0923 03:32:36.792667    9103 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:32:36.792719    9103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:32:36.799907    9103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0923 03:32:36.802767    9103 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0923 03:32:36.802877    9103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:32:36.805289    9103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0923 03:32:36.805712    9103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0923 03:32:36.814877    9103 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0923 03:32:36.814898    9103 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0923 03:32:36.814961    9103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0923 03:32:36.817633    9103 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0923 03:32:36.817653    9103 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:32:36.817712    9103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:32:36.827603    9103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0923 03:32:36.827735    9103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0923 03:32:36.829727    9103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0923 03:32:36.829812    9103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0923 03:32:36.830104    9103 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0923 03:32:36.830116    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0923 03:32:36.831570    9103 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0923 03:32:36.831578    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0923 03:32:36.838601    9103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0923 03:32:36.854970    9103 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0923 03:32:36.854997    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0923 03:32:36.874223    9103 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0923 03:32:36.874247    9103 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0923 03:32:36.874309    9103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0923 03:32:36.908014    9103 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0923 03:32:36.908034    9103 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0923 03:32:36.908039    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0923 03:32:36.908048    9103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0923 03:32:36.908239    9103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0923 03:32:36.959494    9103 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0923 03:32:36.959526    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0923 03:32:36.959604    9103 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0923 03:32:37.162935    9103 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0923 03:32:37.163086    9103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:32:37.189398    9103 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0923 03:32:37.189417    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0923 03:32:37.195095    9103 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0923 03:32:37.195118    9103 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:32:37.195185    9103 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:32:37.342176    9103 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0923 03:32:37.960526    9103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0923 03:32:37.961075    9103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0923 03:32:37.965701    9103 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0923 03:32:37.965753    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0923 03:32:38.026266    9103 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0923 03:32:38.026281    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0923 03:32:38.266473    9103 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0923 03:32:38.266514    9103 cache_images.go:92] duration metric: took 1.940263875s to LoadCachedImages
	W0923 03:32:38.266562    9103 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0923 03:32:38.266567    9103 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0923 03:32:38.266622    9103 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-515000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 03:32:38.266712    9103 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 03:32:38.280721    9103 cni.go:84] Creating CNI manager for ""
	I0923 03:32:38.280733    9103 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:32:38.280739    9103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 03:32:38.280747    9103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-515000 NodeName:running-upgrade-515000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 03:32:38.280813    9103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-515000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
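The kubeadm config logged at kubeadm.go:187 above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined with --- separators and later written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of template-based rendering for the first two documents, assuming an illustrative options struct rather than minikube's actual types:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Illustrative options struct; minikube's real type carries many more
    // fields (compare the kubeadm.go:181 options line above).
    type kubeadmOpts struct {
    	AdvertiseAddress  string
    	BindPort          int
    	KubernetesVersion string
    	PodSubnet         string
    }

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initCfg))
    	_ = t.Execute(os.Stdout, kubeadmOpts{
    		AdvertiseAddress:  "10.0.2.15",
    		BindPort:          8443,
    		KubernetesVersion: "v1.24.1",
    		PodSubnet:         "10.244.0.0/16",
    	})
    }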
	I0923 03:32:38.280876    9103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0923 03:32:38.283619    9103 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 03:32:38.283654    9103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 03:32:38.286342    9103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0923 03:32:38.291292    9103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 03:32:38.296390    9103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0923 03:32:38.301719    9103 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0923 03:32:38.303007    9103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:32:38.390576    9103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 03:32:38.396240    9103 certs.go:68] Setting up /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000 for IP: 10.0.2.15
	I0923 03:32:38.396260    9103 certs.go:194] generating shared ca certs ...
	I0923 03:32:38.396268    9103 certs.go:226] acquiring lock for ca certs: {Name:mk939083d37f22e3f0ca1f4aad8fa886b4374915 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:32:38.396506    9103 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.key
	I0923 03:32:38.396543    9103 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/proxy-client-ca.key
	I0923 03:32:38.396548    9103 certs.go:256] generating profile certs ...
	I0923 03:32:38.396607    9103 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/client.key
	I0923 03:32:38.396619    9103 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/apiserver.key.6f6e2aee
	I0923 03:32:38.396630    9103 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/apiserver.crt.6f6e2aee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0923 03:32:38.466048    9103 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/apiserver.crt.6f6e2aee ...
	I0923 03:32:38.466053    9103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/apiserver.crt.6f6e2aee: {Name:mkc18ea96d7c31ae0c3dd6fd8f3864738b89f3e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:32:38.466272    9103 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/apiserver.key.6f6e2aee ...
	I0923 03:32:38.466275    9103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/apiserver.key.6f6e2aee: {Name:mka1c4c6481054d7c5e8d6d2a5cfe9a3c3b6a149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:32:38.466398    9103 certs.go:381] copying /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/apiserver.crt.6f6e2aee -> /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/apiserver.crt
	I0923 03:32:38.467084    9103 certs.go:385] copying /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/apiserver.key.6f6e2aee -> /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/apiserver.key
	I0923 03:32:38.467363    9103 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/proxy-client.key
	I0923 03:32:38.467483    9103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/7121.pem (1338 bytes)
	W0923 03:32:38.467519    9103 certs.go:480] ignoring /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/7121_empty.pem, impossibly tiny 0 bytes
	I0923 03:32:38.467526    9103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 03:32:38.467545    9103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem (1078 bytes)
	I0923 03:32:38.467565    9103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem (1123 bytes)
	I0923 03:32:38.467583    9103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/key.pem (1675 bytes)
	I0923 03:32:38.467621    9103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/files/etc/ssl/certs/71212.pem (1708 bytes)
	I0923 03:32:38.468010    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 03:32:38.475255    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 03:32:38.482196    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 03:32:38.489783    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 03:32:38.497204    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 03:32:38.504266    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 03:32:38.510692    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 03:32:38.517856    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 03:32:38.525431    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/7121.pem --> /usr/share/ca-certificates/7121.pem (1338 bytes)
	I0923 03:32:38.532267    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/files/etc/ssl/certs/71212.pem --> /usr/share/ca-certificates/71212.pem (1708 bytes)
	I0923 03:32:38.538668    9103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 03:32:38.545822    9103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 03:32:38.550738    9103 ssh_runner.go:195] Run: openssl version
	I0923 03:32:38.552574    9103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7121.pem && ln -fs /usr/share/ca-certificates/7121.pem /etc/ssl/certs/7121.pem"
	I0923 03:32:38.555443    9103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7121.pem
	I0923 03:32:38.556911    9103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:19 /usr/share/ca-certificates/7121.pem
	I0923 03:32:38.556935    9103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7121.pem
	I0923 03:32:38.558596    9103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7121.pem /etc/ssl/certs/51391683.0"
	I0923 03:32:38.561572    9103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71212.pem && ln -fs /usr/share/ca-certificates/71212.pem /etc/ssl/certs/71212.pem"
	I0923 03:32:38.564423    9103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71212.pem
	I0923 03:32:38.565779    9103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:19 /usr/share/ca-certificates/71212.pem
	I0923 03:32:38.565799    9103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71212.pem
	I0923 03:32:38.567416    9103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71212.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 03:32:38.570306    9103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 03:32:38.573695    9103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 03:32:38.575322    9103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0923 03:32:38.575345    9103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 03:32:38.577011    9103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
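Each certificate above gets the same three-step treatment: link the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink <hash>.0 in /etc/ssl/certs so OpenSSL's lookup-by-hash directory scan can find it (b5213941.0 for minikubeCA.pem, for example). A sketch of that convention, shelling out to openssl just as the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert computes the OpenSSL subject hash of a PEM and symlinks
    // <hash>.0 to it, the convention used by the ln -fs runs above.
    func installCACert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // mimic ln -fs: replace any existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Println(err)
    	}
    }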
	I0923 03:32:38.579746    9103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 03:32:38.581222    9103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 03:32:38.582996    9103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 03:32:38.584878    9103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 03:32:38.586536    9103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 03:32:38.588354    9103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 03:32:38.590182    9103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
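The six -checkend 86400 runs assert that each control-plane certificate remains valid for at least another 24 hours; a failure would force regeneration. The same predicate in stdlib Go (a sketch, not minikube's code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the certificate at path is still valid d from
    // now -- the same check as "openssl x509 -checkend 86400" above.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }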
	I0923 03:32:38.591981    9103 kubeadm.go:392] StartCluster: {Name:running-upgrade-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51268 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 03:32:38.592055    9103 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 03:32:38.602581    9103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 03:32:38.606096    9103 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 03:32:38.606102    9103 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 03:32:38.606132    9103 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 03:32:38.609271    9103 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 03:32:38.609307    9103 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-515000" does not appear in /Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:32:38.609326    9103 kubeconfig.go:62] /Users/jenkins/minikube-integration/19689-6600/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-515000" cluster setting kubeconfig missing "running-upgrade-515000" context setting]
	I0923 03:32:38.609512    9103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/kubeconfig: {Name:mkaea904cf5cdb46cc70169c92ea1151561be4fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:32:38.610462    9103 kapi.go:59] client config for running-upgrade-515000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/client.key", CAFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105c06030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 03:32:38.611336    9103 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 03:32:38.614425    9103 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-515000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
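The drift check at kubeadm.go:640 is literally diff -u over the two rendered files: exit status 0 means no drift, status 1 means the config changed (here the criSocket scheme and cgroupDriver), and anything else is an error. A sketch of that decision:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs diff -u like the log above: exit 0 means identical,
    // exit 1 means the rendered config changed, anything else is an error.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil
    	}
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		return true, string(out), nil // files differ: reconfigure
    	}
    	return false, "", err
    }

    func main() {
    	drifted, diff, err := configDrifted(
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new",
    	)
    	fmt.Println(drifted, err)
    	fmt.Print(diff)
    }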
	I0923 03:32:38.614432    9103 kubeadm.go:1160] stopping kube-system containers ...
	I0923 03:32:38.614490    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 03:32:38.626351    9103 docker.go:483] Stopping containers: [21ca8455e110 07ff736a8e8b dc6d88c9684a 0df852ca86bd 82480c643115 5d3d5fd4ca58 47dcc36ad92c 4bfdaa068aeb 055f29a011ae f734c17924bf bb31d0ff2285 45aa3a68aa02 4578dd259c82 955159a11e85]
	I0923 03:32:38.626429    9103 ssh_runner.go:195] Run: docker stop 21ca8455e110 07ff736a8e8b dc6d88c9684a 0df852ca86bd 82480c643115 5d3d5fd4ca58 47dcc36ad92c 4bfdaa068aeb 055f29a011ae f734c17924bf bb31d0ff2285 45aa3a68aa02 4578dd259c82 955159a11e85
	I0923 03:32:38.637098    9103 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 03:32:38.706574    9103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 03:32:38.710539    9103 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 23 10:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 23 10:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 23 10:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 23 10:32 /etc/kubernetes/scheduler.conf
	
	I0923 03:32:38.710575    9103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/admin.conf
	I0923 03:32:38.713887    9103 kubeadm.go:163] "https://control-plane.minikube.internal:51268" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 03:32:38.713922    9103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 03:32:38.717140    9103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/kubelet.conf
	I0923 03:32:38.719802    9103 kubeadm.go:163] "https://control-plane.minikube.internal:51268" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 03:32:38.719828    9103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 03:32:38.722539    9103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/controller-manager.conf
	I0923 03:32:38.725568    9103 kubeadm.go:163] "https://control-plane.minikube.internal:51268" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 03:32:38.725596    9103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 03:32:38.728457    9103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/scheduler.conf
	I0923 03:32:38.730981    9103 kubeadm.go:163] "https://control-plane.minikube.internal:51268" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 03:32:38.731011    9103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 03:32:38.734372    9103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 03:32:38.737786    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:32:38.760099    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:32:39.402238    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:32:39.612303    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:32:39.634832    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:32:39.654780    9103 api_server.go:52] waiting for apiserver process to appear ...
	I0923 03:32:39.654857    9103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 03:32:40.157188    9103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 03:32:40.656987    9103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 03:32:40.661749    9103 api_server.go:72] duration metric: took 1.006992459s to wait for apiserver process to appear ...
	I0923 03:32:40.661759    9103 api_server.go:88] waiting for apiserver healthz status ...
	I0923 03:32:40.661771    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:32:45.663797    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:32:45.663850    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:32:50.664171    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:32:50.664266    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:32:55.664920    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:32:55.664960    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:33:00.665495    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:33:00.665580    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:33:05.666897    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:33:05.666946    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:33:10.668335    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:33:10.668417    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:33:15.669617    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:33:15.669708    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:33:20.672069    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:33:20.672156    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:33:25.674717    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:33:25.674821    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:33:30.677385    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:33:30.677451    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:33:35.679693    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:33:35.679779    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:33:40.681465    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
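Each healthz probe above times out after roughly five seconds and the loop keeps retrying; only after about a minute of consecutive failures does minikube fall back to gathering component logs (next). A minimal sketch of the polling loop, assuming a self-signed-friendly client rather than minikube's CA-aware one:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver healthz endpoint the way the loop
    // above does: short per-request timeout, retried until an overall deadline.
    func waitForHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5 s gaps between attempts
    		Transport: &http.Transport{
    			// The real client trusts minikube's CA; skipping verification
    			// keeps this sketch self-contained.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }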
	I0923 03:33:40.681890    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:33:40.721756    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:33:40.721946    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:33:40.743361    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:33:40.743489    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:33:40.765744    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:33:40.765823    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:33:40.777387    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:33:40.777456    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:33:40.787500    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:33:40.787587    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:33:40.798408    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:33:40.798504    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:33:40.809198    9103 logs.go:276] 0 containers: []
	W0923 03:33:40.809209    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:33:40.809279    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:33:40.819985    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:33:40.820001    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:33:40.820006    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:33:40.833926    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:33:40.833936    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:33:40.845585    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:33:40.845598    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:33:40.870337    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:33:40.870344    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:33:40.882297    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:33:40.882307    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:33:40.955114    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:33:40.955124    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:33:40.969901    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:33:40.969911    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:33:40.986807    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:33:40.986818    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:33:41.026970    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:33:41.026980    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:33:41.038563    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:33:41.038574    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:33:41.054374    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:33:41.054387    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:33:41.071709    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:33:41.071721    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:33:41.090044    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:33:41.090052    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:33:41.111145    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:33:41.111155    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:33:41.122690    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:33:41.122702    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:33:41.127039    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:33:41.127047    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
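The gathering pass itself is mechanical: one docker ps -a --filter name=k8s_<component> query per component to collect container IDs (current and exited, hence two IDs for restarted components), then docker logs --tail 400 on each. A sketch of that loop:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists current and exited containers for one component,
    // the same "docker ps -a --filter name=k8s_<component>" query the log runs.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(err)
    			continue
    		}
    		for _, id := range ids {
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
    		}
    	}
    }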
	I0923 03:33:43.640816    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:33:48.643083    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:33:48.643571    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:33:48.679785    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:33:48.679942    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:33:48.702852    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:33:48.702946    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:33:48.716179    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:33:48.716270    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:33:48.728253    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:33:48.728340    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:33:48.738687    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:33:48.738772    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:33:48.752575    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:33:48.752662    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:33:48.766345    9103 logs.go:276] 0 containers: []
	W0923 03:33:48.766357    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:33:48.766419    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:33:48.776243    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:33:48.776259    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:33:48.776264    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:33:48.781256    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:33:48.781264    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:33:48.800895    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:33:48.800905    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:33:48.827821    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:33:48.827830    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:33:48.867051    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:33:48.867064    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:33:48.884517    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:33:48.884526    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:33:48.895970    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:33:48.895979    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:33:48.908643    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:33:48.908653    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:33:48.920891    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:33:48.920903    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:33:48.936111    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:33:48.936120    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:33:48.950237    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:33:48.950252    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:33:48.965786    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:33:48.965797    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:33:48.981684    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:33:48.981693    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:33:48.998542    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:33:48.998552    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:33:49.038925    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:33:49.038934    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:33:49.050154    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:33:49.050174    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:33:51.569763    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:33:56.572306    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:33:56.572616    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:33:56.596352    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:33:56.596489    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:33:56.612805    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:33:56.612902    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:33:56.625725    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:33:56.625809    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:33:56.637607    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:33:56.637691    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:33:56.647851    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:33:56.647930    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:33:56.658251    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:33:56.658321    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:33:56.669872    9103 logs.go:276] 0 containers: []
	W0923 03:33:56.669882    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:33:56.669943    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:33:56.680752    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:33:56.680769    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:33:56.680774    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:33:56.719599    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:33:56.719606    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:33:56.733063    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:33:56.733074    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:33:56.750126    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:33:56.750136    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:33:56.774611    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:33:56.774621    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:33:56.778797    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:33:56.778804    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:33:56.813970    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:33:56.813984    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:33:56.833594    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:33:56.833604    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:33:56.847133    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:33:56.847145    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:33:56.858681    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:33:56.858690    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:33:56.874417    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:33:56.874427    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:33:56.890757    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:33:56.890769    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:33:56.906153    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:33:56.906161    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:33:56.917388    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:33:56.917399    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:33:56.928665    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:33:56.928678    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:33:56.942140    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:33:56.942150    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:33:59.455688    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:34:04.457865    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:34:04.458471    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:34:04.495771    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:34:04.495937    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:34:04.516768    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:34:04.516911    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:34:04.535870    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:34:04.535960    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:34:04.555104    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:34:04.555192    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:34:04.569262    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:34:04.569347    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:34:04.580147    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:34:04.580227    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:34:04.590425    9103 logs.go:276] 0 containers: []
	W0923 03:34:04.590439    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:34:04.590515    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:34:04.601294    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:34:04.601316    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:34:04.601321    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:34:04.618366    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:34:04.618379    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:34:04.658598    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:34:04.658609    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:34:04.674112    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:34:04.674121    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:34:04.690004    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:34:04.690014    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:34:04.704414    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:34:04.704424    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:34:04.716498    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:34:04.716508    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:34:04.728371    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:34:04.728380    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:34:04.733184    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:34:04.733189    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:34:04.753627    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:34:04.753643    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:34:04.767428    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:34:04.767437    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:34:04.784761    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:34:04.784771    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:34:04.801964    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:34:04.801976    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:34:04.827590    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:34:04.827597    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:34:04.862710    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:34:04.862720    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:34:04.873754    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:34:04.873763    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:34:07.387403    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:34:12.390084    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:34:12.390585    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:34:12.430851    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:34:12.431010    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:34:12.452763    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:34:12.452899    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:34:12.468370    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:34:12.468468    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:34:12.484457    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:34:12.484544    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:34:12.495291    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:34:12.495365    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:34:12.506028    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:34:12.506111    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:34:12.515785    9103 logs.go:276] 0 containers: []
	W0923 03:34:12.515795    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:34:12.515859    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:34:12.526676    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:34:12.526693    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:34:12.526699    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:34:12.538409    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:34:12.538422    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:34:12.549835    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:34:12.549847    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:34:12.575979    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:34:12.575987    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:34:12.587851    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:34:12.587863    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:34:12.619002    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:34:12.619012    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:34:12.662245    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:34:12.662258    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:34:12.676726    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:34:12.676737    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:34:12.699542    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:34:12.699557    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:34:12.740427    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:34:12.740434    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:34:12.756035    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:34:12.756044    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:34:12.767682    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:34:12.767693    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:34:12.781326    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:34:12.781336    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:34:12.797802    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:34:12.797812    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:34:12.809447    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:34:12.809460    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:34:12.823756    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:34:12.823770    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:34:15.330610    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:34:20.333420    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:34:20.333967    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:34:20.382765    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:34:20.382920    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:34:20.403181    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:34:20.403296    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:34:20.417278    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:34:20.417366    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:34:20.429495    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:34:20.429575    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:34:20.439905    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:34:20.439981    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:34:20.451090    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:34:20.451170    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:34:20.461720    9103 logs.go:276] 0 containers: []
	W0923 03:34:20.461730    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:34:20.461793    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:34:20.472386    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:34:20.472402    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:34:20.472407    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:34:20.477202    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:34:20.477235    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:34:20.490959    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:34:20.490969    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:34:20.507223    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:34:20.507235    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:34:20.519059    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:34:20.519068    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:34:20.553995    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:34:20.554004    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:34:20.566439    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:34:20.566450    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:34:20.580699    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:34:20.580709    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:34:20.597947    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:34:20.597957    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:34:20.610671    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:34:20.610685    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:34:20.652358    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:34:20.652370    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:34:20.672897    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:34:20.672912    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:34:20.694078    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:34:20.694087    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:34:20.708613    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:34:20.708624    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:34:20.723974    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:34:20.723985    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:34:20.735345    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:34:20.735357    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:34:23.263568    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:34:28.266197    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:34:28.266757    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:34:28.308102    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:34:28.308264    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:34:28.330922    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:34:28.331065    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:34:28.346674    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:34:28.346773    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:34:28.359375    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:34:28.359462    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:34:28.370519    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:34:28.370607    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:34:28.381387    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:34:28.381459    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:34:28.394606    9103 logs.go:276] 0 containers: []
	W0923 03:34:28.394618    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:34:28.394693    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:34:28.404934    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:34:28.404954    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:34:28.404959    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:34:28.419372    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:34:28.419385    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:34:28.430907    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:34:28.430916    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:34:28.442716    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:34:28.442730    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:34:28.454221    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:34:28.454235    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:34:28.472665    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:34:28.472678    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:34:28.486043    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:34:28.486054    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:34:28.501251    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:34:28.501262    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:34:28.505567    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:34:28.505575    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:34:28.539645    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:34:28.539655    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:34:28.558356    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:34:28.558365    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:34:28.574004    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:34:28.574013    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:34:28.585612    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:34:28.585621    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:34:28.611594    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:34:28.611602    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:34:28.652519    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:34:28.652527    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:34:28.671999    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:34:28.672009    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:34:31.189675    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:34:36.192324    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:34:36.192917    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:34:36.232612    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:34:36.232779    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:34:36.254179    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:34:36.254306    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:34:36.270024    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:34:36.270113    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:34:36.282692    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:34:36.282779    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:34:36.297393    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:34:36.297460    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:34:36.307762    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:34:36.307841    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:34:36.318290    9103 logs.go:276] 0 containers: []
	W0923 03:34:36.318302    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:34:36.318375    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:34:36.328620    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:34:36.328638    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:34:36.328643    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:34:36.369025    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:34:36.369034    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:34:36.383740    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:34:36.383748    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:34:36.398380    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:34:36.398389    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:34:36.409844    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:34:36.409852    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:34:36.435298    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:34:36.435305    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:34:36.449087    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:34:36.449097    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:34:36.468132    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:34:36.468147    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:34:36.482142    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:34:36.482151    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:34:36.493342    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:34:36.493351    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:34:36.527975    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:34:36.527986    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:34:36.544310    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:34:36.544321    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:34:36.562047    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:34:36.562059    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:34:36.566430    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:34:36.566439    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:34:36.582612    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:34:36.582623    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:34:36.598111    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:34:36.598123    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:34:39.111397    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:34:44.113633    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:34:44.113802    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:34:44.125783    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:34:44.125865    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:34:44.136576    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:34:44.136654    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:34:44.146906    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:34:44.146984    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:34:44.157702    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:34:44.157773    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:34:44.167736    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:34:44.167806    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:34:44.178062    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:34:44.178125    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:34:44.188144    9103 logs.go:276] 0 containers: []
	W0923 03:34:44.188159    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:34:44.188227    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:34:44.198452    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:34:44.198471    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:34:44.198476    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:34:44.217201    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:34:44.217212    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:34:44.231765    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:34:44.231775    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:34:44.243826    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:34:44.243838    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:34:44.248535    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:34:44.248541    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:34:44.282431    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:34:44.282442    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:34:44.299087    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:34:44.299098    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:34:44.313218    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:34:44.313230    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:34:44.325214    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:34:44.325226    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:34:44.363404    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:34:44.363413    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:34:44.377176    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:34:44.377187    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:34:44.388436    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:34:44.388449    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:34:44.413997    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:34:44.414005    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:34:44.434702    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:34:44.434712    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:34:44.449715    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:34:44.449724    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:34:44.467245    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:34:44.467256    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:34:46.981060    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:34:51.983688    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:34:51.984232    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:34:52.033339    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:34:52.033491    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:34:52.050412    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:34:52.050503    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:34:52.063051    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:34:52.063129    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:34:52.079324    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:34:52.079398    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:34:52.094010    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:34:52.094082    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:34:52.104735    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:34:52.104823    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:34:52.114682    9103 logs.go:276] 0 containers: []
	W0923 03:34:52.114695    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:34:52.114762    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:34:52.124862    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:34:52.124878    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:34:52.124883    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:34:52.139013    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:34:52.139026    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:34:52.150181    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:34:52.150192    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:34:52.161180    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:34:52.161194    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:34:52.172911    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:34:52.172923    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:34:52.213456    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:34:52.213463    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:34:52.217573    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:34:52.217579    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:34:52.232253    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:34:52.232263    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:34:52.256552    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:34:52.256558    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:34:52.273820    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:34:52.273830    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:34:52.285101    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:34:52.285111    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:34:52.306798    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:34:52.306814    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:34:52.323216    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:34:52.323227    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:34:52.337039    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:34:52.337049    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:34:52.348844    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:34:52.348856    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:34:52.383746    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:34:52.383755    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:34:54.899799    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:34:59.902429    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:34:59.902553    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:34:59.913587    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:34:59.913669    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:34:59.928002    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:34:59.928093    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:34:59.938767    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:34:59.938840    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:34:59.958659    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:34:59.958735    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:34:59.969362    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:34:59.969442    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:34:59.979836    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:34:59.979917    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:34:59.990527    9103 logs.go:276] 0 containers: []
	W0923 03:34:59.990539    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:34:59.990608    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:35:00.000816    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:35:00.000832    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:35:00.000837    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:35:00.016284    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:35:00.016295    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:35:00.030910    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:35:00.030923    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:35:00.051817    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:35:00.051828    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:35:00.071173    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:35:00.071184    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:35:00.087876    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:35:00.087891    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:35:00.109466    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:35:00.109481    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:35:00.133836    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:35:00.133843    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:35:00.145349    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:35:00.145359    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:35:00.160720    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:35:00.160736    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:35:00.173077    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:35:00.173088    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:35:00.190556    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:35:00.190566    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:35:00.202162    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:35:00.202176    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:35:00.238009    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:35:00.238020    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:35:00.242546    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:35:00.242553    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:35:00.254205    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:35:00.254215    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:35:02.795797    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:35:07.798006    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:35:07.798286    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:35:07.816503    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:35:07.816621    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:35:07.830771    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:35:07.830861    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:35:07.842529    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:35:07.842612    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:35:07.852915    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:35:07.852997    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:35:07.863931    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:35:07.864005    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:35:07.874305    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:35:07.874389    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:35:07.885099    9103 logs.go:276] 0 containers: []
	W0923 03:35:07.885115    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:35:07.885183    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:35:07.895218    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:35:07.895237    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:35:07.895243    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:35:07.906290    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:35:07.906299    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:35:07.910483    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:35:07.910492    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:35:07.924115    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:35:07.924123    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:35:07.943551    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:35:07.943562    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:35:07.954240    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:35:07.954251    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:35:07.967878    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:35:07.967889    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:35:07.979493    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:35:07.979501    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:35:08.003386    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:35:08.003393    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:35:08.037831    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:35:08.037846    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:35:08.054605    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:35:08.054616    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:35:08.074357    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:35:08.074367    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:35:08.101276    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:35:08.101285    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:35:08.113178    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:35:08.113188    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:35:08.124916    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:35:08.124930    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:35:08.163353    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:35:08.163363    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:35:10.678941    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:35:15.681495    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:35:15.681611    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:35:15.692591    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:35:15.692681    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:35:15.702837    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:35:15.702922    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:35:15.712663    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:35:15.712744    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:35:15.729833    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:35:15.729921    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:35:15.740564    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:35:15.740652    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:35:15.751318    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:35:15.751396    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:35:15.762054    9103 logs.go:276] 0 containers: []
	W0923 03:35:15.762065    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:35:15.762138    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:35:15.773221    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:35:15.773238    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:35:15.773244    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:35:15.787798    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:35:15.787811    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:35:15.808627    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:35:15.808640    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:35:15.827286    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:35:15.827302    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:35:15.839870    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:35:15.839880    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:35:15.859206    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:35:15.859222    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:35:15.896262    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:35:15.896278    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:35:15.912514    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:35:15.912526    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:35:15.924781    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:35:15.924795    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:35:15.937389    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:35:15.937401    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:35:15.962038    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:35:15.962059    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:35:16.002332    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:35:16.002348    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:35:16.007644    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:35:16.007656    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:35:16.027824    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:35:16.027840    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:35:16.043993    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:35:16.044004    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:35:16.055668    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:35:16.055681    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:35:18.570179    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:35:23.572441    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:35:23.572896    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:35:23.607602    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:35:23.607765    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:35:23.627484    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:35:23.627607    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:35:23.644153    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:35:23.644241    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:35:23.658282    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:35:23.658375    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:35:23.670185    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:35:23.670274    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:35:23.682390    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:35:23.682477    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:35:23.692551    9103 logs.go:276] 0 containers: []
	W0923 03:35:23.692563    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:35:23.692636    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:35:23.703957    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:35:23.703976    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:35:23.703982    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:35:23.721953    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:35:23.721963    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:35:23.747096    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:35:23.747107    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:35:23.782408    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:35:23.782423    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:35:23.802330    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:35:23.802342    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:35:23.813486    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:35:23.813498    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:35:23.833913    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:35:23.833924    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:35:23.873781    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:35:23.873789    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:35:23.890114    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:35:23.890124    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:35:23.902225    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:35:23.902235    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:35:23.915513    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:35:23.915523    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:35:23.920227    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:35:23.920235    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:35:23.934292    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:35:23.934301    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:35:23.948407    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:35:23.948417    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:35:23.965377    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:35:23.965386    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:35:23.976897    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:35:23.976912    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:35:26.490498    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:35:31.492750    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:35:31.493333    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:35:31.531498    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:35:31.531685    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:35:31.558524    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:35:31.558657    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:35:31.576195    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:35:31.576289    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:35:31.587834    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:35:31.587930    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:35:31.598535    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:35:31.598610    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:35:31.609801    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:35:31.609886    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:35:31.620577    9103 logs.go:276] 0 containers: []
	W0923 03:35:31.620587    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:35:31.620662    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:35:31.633275    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:35:31.633302    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:35:31.633309    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:35:31.646513    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:35:31.646523    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:35:31.682440    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:35:31.682456    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:35:31.697139    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:35:31.697148    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:35:31.711490    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:35:31.711500    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:35:31.734963    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:35:31.734973    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:35:31.749442    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:35:31.749453    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:35:31.760737    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:35:31.760749    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:35:31.800896    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:35:31.800904    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:35:31.818288    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:35:31.818300    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:35:31.830036    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:35:31.830046    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:35:31.844857    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:35:31.844870    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:35:31.856412    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:35:31.856424    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:35:31.881299    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:35:31.881308    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:35:31.885215    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:35:31.885224    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:35:31.900469    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:35:31.900479    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:35:34.413425    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:35:39.415597    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:35:39.415718    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:35:39.426820    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:35:39.426907    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:35:39.436926    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:35:39.437008    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:35:39.447664    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:35:39.447744    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:35:39.458140    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:35:39.458229    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:35:39.468547    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:35:39.468624    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:35:39.479109    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:35:39.479193    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:35:39.489615    9103 logs.go:276] 0 containers: []
	W0923 03:35:39.489627    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:35:39.489696    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:35:39.500292    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:35:39.500309    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:35:39.500314    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:35:39.511219    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:35:39.511228    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:35:39.522689    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:35:39.522704    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:35:39.527630    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:35:39.527637    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:35:39.539901    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:35:39.539911    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:35:39.578222    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:35:39.578234    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:35:39.594083    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:35:39.594094    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:35:39.607472    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:35:39.607482    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:35:39.631777    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:35:39.631789    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:35:39.643755    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:35:39.643768    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:35:39.667757    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:35:39.667767    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:35:39.706283    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:35:39.706291    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:35:39.720266    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:35:39.720276    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:35:39.742599    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:35:39.742610    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:35:39.756963    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:35:39.756975    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:35:39.772816    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:35:39.772824    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:35:42.288668    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:35:47.290944    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:35:47.291509    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:35:47.331069    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:35:47.331230    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:35:47.352554    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:35:47.352732    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:35:47.369055    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:35:47.369143    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:35:47.381965    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:35:47.382050    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:35:47.392762    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:35:47.392845    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:35:47.403759    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:35:47.403830    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:35:47.418714    9103 logs.go:276] 0 containers: []
	W0923 03:35:47.418727    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:35:47.418804    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:35:47.432252    9103 logs.go:276] 1 containers: [a7ae46e29668]
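Each gathering round begins by enumerating the container IDs of the expected components, one docker ps query per k8s_<name> filter. Condensed into a loop (a sketch assuming the same k8s_ name prefix that cri-dockerd gives pod containers):

    # list all containers (running or exited) per control-plane component
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      echo "$c: $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}' | tr '\n' ' ')"
    done

kindnet prints nothing here, which matches the "No container was found" warning in each round.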
	I0923 03:35:47.432270    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:35:47.432276    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:35:47.468347    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:35:47.468357    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:35:47.488966    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:35:47.488975    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:35:47.529095    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:35:47.529104    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:35:47.540906    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:35:47.540918    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:35:47.552301    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:35:47.552314    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:35:47.575860    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:35:47.575871    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:35:47.587582    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:35:47.587593    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:35:47.612256    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:35:47.612262    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:35:47.627041    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:35:47.627054    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:35:47.642919    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:35:47.642930    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:35:47.661114    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:35:47.661126    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:35:47.678848    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:35:47.678859    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:35:47.683717    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:35:47.683725    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:35:47.694532    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:35:47.694543    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:35:47.712299    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:35:47.712311    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:35:50.226089    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:35:55.228180    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:35:55.228330    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:35:55.239505    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:35:55.239590    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:35:55.250031    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:35:55.250118    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:35:55.260334    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:35:55.260419    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:35:55.271369    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:35:55.271455    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:35:55.281699    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:35:55.281781    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:35:55.292010    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:35:55.292094    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:35:55.302543    9103 logs.go:276] 0 containers: []
	W0923 03:35:55.302558    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:35:55.302624    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:35:55.313615    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:35:55.313640    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:35:55.313644    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:35:55.353612    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:35:55.353629    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:35:55.365671    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:35:55.365683    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:35:55.383518    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:35:55.383529    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:35:55.388121    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:35:55.388128    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:35:55.402346    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:35:55.402356    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:35:55.419205    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:35:55.419220    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:35:55.434421    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:35:55.434432    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:35:55.452055    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:35:55.452069    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:35:55.494477    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:35:55.494489    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:35:55.508981    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:35:55.508993    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:35:55.522138    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:35:55.522153    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:35:55.533949    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:35:55.533964    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:35:55.559302    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:35:55.559312    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:35:55.581224    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:35:55.581238    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:35:55.593318    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:35:55.593329    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:35:58.107398    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:03.109658    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:03.109947    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:36:03.139733    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:36:03.139897    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:36:03.157584    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:36:03.157691    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:36:03.170671    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:36:03.170758    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:36:03.182073    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:36:03.182162    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:36:03.192019    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:36:03.192096    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:36:03.202682    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:36:03.202761    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:36:03.212708    9103 logs.go:276] 0 containers: []
	W0923 03:36:03.212718    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:36:03.212794    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:36:03.225856    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:36:03.225877    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:36:03.225890    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:36:03.251258    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:36:03.251266    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:36:03.266282    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:36:03.266293    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:36:03.278192    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:36:03.278202    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:36:03.292171    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:36:03.292183    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:36:03.308974    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:36:03.308984    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:36:03.320325    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:36:03.320339    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:36:03.334083    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:36:03.334096    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:36:03.347685    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:36:03.347699    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:36:03.383611    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:36:03.383627    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:36:03.421701    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:36:03.421713    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:36:03.426112    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:36:03.426117    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:36:03.437178    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:36:03.437190    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:36:03.448830    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:36:03.448840    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:36:03.460246    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:36:03.460254    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:36:03.479853    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:36:03.479871    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:36:06.005820    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:11.006670    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:11.006771    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:36:11.019288    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:36:11.019377    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:36:11.031249    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:36:11.031337    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:36:11.045889    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:36:11.045976    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:36:11.058059    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:36:11.058151    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:36:11.070894    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:36:11.070986    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:36:11.084352    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:36:11.084446    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:36:11.096148    9103 logs.go:276] 0 containers: []
	W0923 03:36:11.096161    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:36:11.096238    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:36:11.107890    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:36:11.107908    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:36:11.107914    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:36:11.123251    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:36:11.123263    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:36:11.135669    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:36:11.135682    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:36:11.150710    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:36:11.150722    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:36:11.170234    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:36:11.170248    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:36:11.183309    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:36:11.183322    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:36:11.187905    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:36:11.187917    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:36:11.213549    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:36:11.213564    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:36:11.232838    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:36:11.232849    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:36:11.255298    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:36:11.255316    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:36:11.299035    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:36:11.299051    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:36:11.316526    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:36:11.316540    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:36:11.330252    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:36:11.330263    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:36:11.369469    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:36:11.369483    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:36:11.388796    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:36:11.388807    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:36:11.401877    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:36:11.401889    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:36:13.930999    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:18.933147    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:18.933365    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:36:18.948055    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:36:18.948170    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:36:18.962306    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:36:18.962400    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:36:18.974145    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:36:18.974229    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:36:18.985471    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:36:18.985556    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:36:19.004122    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:36:19.004198    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:36:19.015350    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:36:19.015437    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:36:19.031778    9103 logs.go:276] 0 containers: []
	W0923 03:36:19.031791    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:36:19.031866    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:36:19.043237    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:36:19.043259    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:36:19.043266    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:36:19.055616    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:36:19.055628    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:36:19.076519    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:36:19.076530    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:36:19.094931    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:36:19.094943    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:36:19.110917    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:36:19.110929    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:36:19.123113    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:36:19.123124    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:36:19.165612    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:36:19.165627    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:36:19.203852    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:36:19.203864    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:36:19.208456    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:36:19.208468    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:36:19.225481    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:36:19.225498    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:36:19.239160    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:36:19.239173    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:36:19.257625    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:36:19.257643    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:36:19.270190    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:36:19.270201    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:36:19.296018    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:36:19.296028    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:36:19.307882    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:36:19.307895    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:36:19.322225    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:36:19.322236    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:36:21.840564    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:26.842727    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:26.842895    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:36:26.856913    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:36:26.857017    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:36:26.868014    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:36:26.868103    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:36:26.878404    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:36:26.878480    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:36:26.889064    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:36:26.889154    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:36:26.900740    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:36:26.900812    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:36:26.919836    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:36:26.919918    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:36:26.929748    9103 logs.go:276] 0 containers: []
	W0923 03:36:26.929758    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:36:26.929818    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:36:26.945075    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:36:26.945093    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:36:26.945099    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:36:26.958672    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:36:26.958685    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:36:26.975109    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:36:26.975122    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:36:26.987442    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:36:26.987452    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:36:26.999378    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:36:26.999389    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:36:27.010928    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:36:27.010941    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:36:27.015135    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:36:27.015144    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:36:27.029121    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:36:27.029130    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:36:27.049470    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:36:27.049483    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:36:27.062016    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:36:27.062027    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:36:27.096896    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:36:27.096911    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:36:27.113965    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:36:27.113978    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:36:27.130014    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:36:27.130027    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:36:27.141751    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:36:27.141762    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:36:27.164939    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:36:27.164948    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:36:27.204299    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:36:27.204311    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:36:29.729771    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:34.731891    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:34.732009    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:36:34.742910    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:36:34.742992    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:36:34.754372    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:36:34.754451    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:36:34.765334    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:36:34.765412    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:36:34.775847    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:36:34.775918    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:36:34.789165    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:36:34.789245    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:36:34.799708    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:36:34.799788    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:36:34.809634    9103 logs.go:276] 0 containers: []
	W0923 03:36:34.809645    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:36:34.809711    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:36:34.824263    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:36:34.824280    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:36:34.824286    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:36:34.828727    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:36:34.828736    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:36:34.842686    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:36:34.842696    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:36:34.855372    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:36:34.855383    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:36:34.910980    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:36:34.910995    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:36:34.931056    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:36:34.931066    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:36:34.949514    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:36:34.949528    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:36:34.961082    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:36:34.961092    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:36:34.972257    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:36:34.972267    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:36:35.011284    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:36:35.011300    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:36:35.025988    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:36:35.025999    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:36:35.040980    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:36:35.040993    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:36:35.052223    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:36:35.052233    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:36:35.063940    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:36:35.063950    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:36:35.082561    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:36:35.082571    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:36:35.099889    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:36:35.099899    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:36:37.625687    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:42.627892    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:42.627971    9103 kubeadm.go:597] duration metric: took 4m4.027797917s to restartPrimaryControlPlane
	W0923 03:36:42.628026    9103 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0923 03:36:42.628047    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0923 03:36:43.604367    9103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 03:36:43.609294    9103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 03:36:43.612087    9103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 03:36:43.614835    9103 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 03:36:43.614841    9103 kubeadm.go:157] found existing configuration files:
	
	I0923 03:36:43.614870    9103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/admin.conf
	I0923 03:36:43.617964    9103 kubeadm.go:163] "https://control-plane.minikube.internal:51268" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 03:36:43.617990    9103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 03:36:43.621405    9103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/kubelet.conf
	I0923 03:36:43.624044    9103 kubeadm.go:163] "https://control-plane.minikube.internal:51268" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 03:36:43.624073    9103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 03:36:43.626578    9103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/controller-manager.conf
	I0923 03:36:43.629704    9103 kubeadm.go:163] "https://control-plane.minikube.internal:51268" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 03:36:43.629729    9103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 03:36:43.633000    9103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/scheduler.conf
	I0923 03:36:43.635513    9103 kubeadm.go:163] "https://control-plane.minikube.internal:51268" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 03:36:43.635538    9103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
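The stale-config check above applies one rule per kubeconfig: if /etc/kubernetes/<name>.conf does not contain the expected control-plane endpoint, remove it so kubeadm can regenerate it. The four grep/rm pairs condense to (a sketch using the endpoint shown in the log):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:51268" "/etc/kubernetes/${f}.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/${f}.conf"   # missing or mismatched -> delete
    done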
	I0923 03:36:43.638200    9103 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
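The long --ignore-preflight-errors list tells kubeadm not to abort over leftover manifest and etcd directories, an occupied port 10250, swap, or low CPU/memory, all of which are expected when re-initializing over a previous cluster. To see which checks would otherwise fail, the preflight phase can be run on its own (a sketch; same config file as above):

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml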
	I0923 03:36:43.657309    9103 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0923 03:36:43.657352    9103 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 03:36:43.706954    9103 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 03:36:43.707012    9103 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 03:36:43.707061    9103 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 03:36:43.761350    9103 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 03:36:43.765558    9103 out.go:235]   - Generating certificates and keys ...
	I0923 03:36:43.765594    9103 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 03:36:43.765627    9103 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 03:36:43.765671    9103 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 03:36:43.765705    9103 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0923 03:36:43.765742    9103 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0923 03:36:43.765771    9103 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0923 03:36:43.765832    9103 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0923 03:36:43.765867    9103 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0923 03:36:43.765931    9103 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 03:36:43.765971    9103 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 03:36:43.765987    9103 kubeadm.go:310] [certs] Using the existing "sa" key
	I0923 03:36:43.766018    9103 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 03:36:43.849251    9103 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 03:36:44.033484    9103 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 03:36:44.075720    9103 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 03:36:44.146643    9103 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 03:36:44.174318    9103 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 03:36:44.174739    9103 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 03:36:44.174854    9103 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 03:36:44.259521    9103 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 03:36:44.267613    9103 out.go:235]   - Booting up control plane ...
	I0923 03:36:44.267725    9103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 03:36:44.267777    9103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 03:36:44.267887    9103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 03:36:44.267932    9103 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 03:36:44.268022    9103 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0923 03:36:48.269635    9103 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.003058 seconds
	I0923 03:36:48.269693    9103 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 03:36:48.273064    9103 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 03:36:48.784470    9103 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 03:36:48.784666    9103 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-515000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 03:36:49.288053    9103 kubeadm.go:310] [bootstrap-token] Using token: l9acn1.amb07pew0jrfe2vi
	I0923 03:36:49.294427    9103 out.go:235]   - Configuring RBAC rules ...
	I0923 03:36:49.294492    9103 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 03:36:49.294545    9103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 03:36:49.296579    9103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 03:36:49.297928    9103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 03:36:49.298793    9103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 03:36:49.299665    9103 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 03:36:49.303103    9103 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 03:36:49.477770    9103 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 03:36:49.691979    9103 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 03:36:49.692474    9103 kubeadm.go:310] 
	I0923 03:36:49.692508    9103 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 03:36:49.692511    9103 kubeadm.go:310] 
	I0923 03:36:49.692564    9103 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 03:36:49.692573    9103 kubeadm.go:310] 
	I0923 03:36:49.692590    9103 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 03:36:49.692617    9103 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 03:36:49.692642    9103 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 03:36:49.692644    9103 kubeadm.go:310] 
	I0923 03:36:49.692672    9103 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 03:36:49.692676    9103 kubeadm.go:310] 
	I0923 03:36:49.692701    9103 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 03:36:49.692703    9103 kubeadm.go:310] 
	I0923 03:36:49.692729    9103 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 03:36:49.692773    9103 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 03:36:49.692816    9103 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 03:36:49.692820    9103 kubeadm.go:310] 
	I0923 03:36:49.692865    9103 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 03:36:49.692914    9103 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 03:36:49.692916    9103 kubeadm.go:310] 
	I0923 03:36:49.692958    9103 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l9acn1.amb07pew0jrfe2vi \
	I0923 03:36:49.693012    9103 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:81f714c3d291eb98c715930c3a37747b44257f11dd80fa89b92bbab22cea301f \
	I0923 03:36:49.693028    9103 kubeadm.go:310] 	--control-plane 
	I0923 03:36:49.693031    9103 kubeadm.go:310] 
	I0923 03:36:49.693090    9103 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 03:36:49.693096    9103 kubeadm.go:310] 
	I0923 03:36:49.693155    9103 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l9acn1.amb07pew0jrfe2vi \
	I0923 03:36:49.693215    9103 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:81f714c3d291eb98c715930c3a37747b44257f11dd80fa89b92bbab22cea301f 
	I0923 03:36:49.693272    9103 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
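The only preflight warning left is that the kubelet systemd unit is not enabled, so it would not start on a VM reboot; the remedy is the one the message itself suggests:

    sudo systemctl enable kubelet.service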
	I0923 03:36:49.693279    9103 cni.go:84] Creating CNI manager for ""
	I0923 03:36:49.693288    9103 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:36:49.696740    9103 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 03:36:49.703849    9103 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 03:36:49.706762    9103 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
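The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced on the previous line. Its exact contents are not captured in the log; a representative bridge conflist (illustrative only, field values assumed) could be written like this:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF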
	I0923 03:36:49.711498    9103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 03:36:49.711545    9103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 03:36:49.711576    9103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-515000 minikube.k8s.io/updated_at=2024_09_23T03_36_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=running-upgrade-515000 minikube.k8s.io/primary=true
	I0923 03:36:49.756157    9103 kubeadm.go:1113] duration metric: took 44.651083ms to wait for elevateKubeSystemPrivileges
	I0923 03:36:49.756182    9103 ops.go:34] apiserver oom_adj: -16
	I0923 03:36:49.756228    9103 kubeadm.go:394] duration metric: took 4m11.170342708s to StartCluster
	I0923 03:36:49.756240    9103 settings.go:142] acquiring lock: {Name:mk179b7e7e669ed9fc071f7eb5301e91538a634e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:36:49.756401    9103 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:36:49.756810    9103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/kubeconfig: {Name:mkaea904cf5cdb46cc70169c92ea1151561be4fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:36:49.757008    9103 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:36:49.757016    9103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 03:36:49.757085    9103 config.go:182] Loaded profile config "running-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:36:49.757114    9103 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-515000"
	I0923 03:36:49.757090    9103 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-515000"
	I0923 03:36:49.757130    9103 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-515000"
	W0923 03:36:49.757135    9103 addons.go:243] addon storage-provisioner should already be in state true
	I0923 03:36:49.757149    9103 host.go:66] Checking if "running-upgrade-515000" exists ...
	I0923 03:36:49.757121    9103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-515000"
	I0923 03:36:49.757997    9103 kapi.go:59] client config for running-upgrade-515000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/client.key", CAFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105c06030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 03:36:49.758126    9103 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-515000"
	W0923 03:36:49.758130    9103 addons.go:243] addon default-storageclass should already be in state true
	I0923 03:36:49.758137    9103 host.go:66] Checking if "running-upgrade-515000" exists ...
	I0923 03:36:49.761781    9103 out.go:177] * Verifying Kubernetes components...
	I0923 03:36:49.762141    9103 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 03:36:49.765884    9103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 03:36:49.765891    9103 sshutil.go:53] new ssh client: &{IP:localhost Port:51236 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/running-upgrade-515000/id_rsa Username:docker}
	I0923 03:36:49.769743    9103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:36:49.773674    9103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:49.776705    9103 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 03:36:49.776711    9103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 03:36:49.776716    9103 sshutil.go:53] new ssh client: &{IP:localhost Port:51236 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/running-upgrade-515000/id_rsa Username:docker}
	I0923 03:36:49.872302    9103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 03:36:49.877022    9103 api_server.go:52] waiting for apiserver process to appear ...
	I0923 03:36:49.877080    9103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 03:36:49.881221    9103 api_server.go:72] duration metric: took 124.205083ms to wait for apiserver process to appear ...
	I0923 03:36:49.881228    9103 api_server.go:88] waiting for apiserver healthz status ...
	I0923 03:36:49.881235    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:49.914238    9103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 03:36:49.940286    9103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 03:36:50.224439    9103 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 03:36:50.224451    9103 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 03:36:54.883190    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:54.883220    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:59.883411    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:59.883455    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:04.883693    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:04.883730    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:09.884059    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:09.884084    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:14.884585    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:14.884644    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:19.884793    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:19.884832    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0923 03:37:20.226232    9103 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0923 03:37:20.229636    9103 out.go:177] * Enabled addons: storage-provisioner
	I0923 03:37:20.242574    9103 addons.go:510] duration metric: took 30.486225625s for enable addons: enabled=[storage-provisioner]
	I0923 03:37:24.885608    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:24.885655    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:29.886590    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:29.886613    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:34.887751    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:34.887778    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:39.889264    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:39.889308    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:44.891231    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:44.891288    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:49.893564    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:49.893739    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:49.919876    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:37:49.920077    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:49.934400    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:37:49.934468    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:49.944794    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:37:49.944860    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:49.955193    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:37:49.955263    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:49.965790    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:37:49.965862    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:49.980902    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:37:49.980966    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:49.990910    9103 logs.go:276] 0 containers: []
	W0923 03:37:49.990920    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:49.990980    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:50.001374    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:37:50.001387    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:37:50.001392    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:37:50.018429    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:37:50.018438    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:50.029841    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:37:50.029851    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:37:50.044599    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:37:50.044610    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:37:50.056256    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:37:50.056270    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:37:50.075953    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:37:50.075963    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:37:50.090686    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:37:50.090699    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:37:50.102262    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:37:50.102281    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:37:50.114538    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:50.114548    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:50.138536    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:50.138545    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:50.175109    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:50.175118    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:50.179394    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:50.179400    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:50.216822    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:37:50.216834    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:37:52.733596    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:57.733960    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:57.734172    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:57.749932    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:37:57.750042    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:57.761538    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:37:57.761623    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:57.772946    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:37:57.773030    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:57.786250    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:37:57.786336    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:57.799460    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:37:57.799540    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:57.809919    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:37:57.809998    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:57.820042    9103 logs.go:276] 0 containers: []
	W0923 03:37:57.820053    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:57.820127    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:57.832215    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:37:57.832229    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:57.832234    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:57.836654    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:37:57.836663    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:37:57.849257    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:37:57.849274    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:37:57.864126    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:37:57.864141    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:37:57.881124    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:37:57.881138    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:37:57.892420    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:57.892434    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:57.917789    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:57.917796    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:57.955244    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:57.955251    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:57.991693    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:37:57.991705    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:37:58.006201    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:37:58.006212    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:37:58.020600    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:37:58.020613    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:37:58.031887    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:37:58.031902    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:37:58.043312    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:37:58.043321    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:00.558698    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:05.559504    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:05.559834    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:05.585630    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:05.585773    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:05.600780    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:05.600877    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:05.613783    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:38:05.613861    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:05.624561    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:05.624632    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:05.634851    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:05.634919    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:05.651324    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:05.651409    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:05.662135    9103 logs.go:276] 0 containers: []
	W0923 03:38:05.662149    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:05.662219    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:05.672486    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:05.672500    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:05.672506    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:05.696665    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:05.696676    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:05.709675    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:05.709686    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:05.721754    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:05.721766    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:05.733501    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:05.733517    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:05.768230    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:05.768245    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:05.782481    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:05.782491    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:05.796553    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:05.796564    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:05.808795    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:05.808807    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:05.820668    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:05.820678    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:38:05.835497    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:05.835507    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:05.873344    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:05.873352    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:05.878196    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:05.878202    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:08.397587    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:13.399698    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:13.399863    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:13.419338    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:13.419441    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:13.433697    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:13.433791    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:13.445522    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:38:13.445607    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:13.456346    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:13.456433    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:13.466314    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:13.466390    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:13.476451    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:13.476534    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:13.486818    9103 logs.go:276] 0 containers: []
	W0923 03:38:13.486831    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:13.486904    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:13.497152    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:13.497167    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:13.497173    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:13.509886    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:13.509896    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:13.523171    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:13.523185    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:13.534612    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:13.534624    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:13.552054    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:13.552067    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:13.568989    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:13.569002    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:13.606987    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:13.606994    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:13.611257    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:13.611265    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:13.646176    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:13.646188    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:13.671020    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:13.671029    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:13.681891    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:13.681903    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:13.695940    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:13.695954    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:13.709624    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:13.709635    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:38:16.226442    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:21.228604    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:21.228854    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:21.249612    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:21.249725    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:21.264312    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:21.264404    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:21.277102    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:38:21.277189    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:21.287661    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:21.287740    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:21.298461    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:21.298545    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:21.309432    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:21.309511    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:21.319308    9103 logs.go:276] 0 containers: []
	W0923 03:38:21.319320    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:21.319385    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:21.329793    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:21.329806    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:21.329811    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:21.341523    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:21.341533    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:21.352914    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:21.352926    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:21.371140    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:21.371151    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:21.382512    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:21.382526    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:21.405775    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:21.405784    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:21.417939    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:21.417949    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:21.455567    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:21.455578    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:21.492912    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:21.492927    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:21.507101    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:21.507110    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:21.523798    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:21.523809    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:21.535130    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:21.535140    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:38:21.549634    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:21.549648    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:24.056383    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:29.058757    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:29.059188    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:29.089406    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:29.089569    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:29.109316    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:29.109430    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:29.124103    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:38:29.124193    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:29.135898    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:29.135985    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:29.146738    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:29.146814    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:29.158076    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:29.158148    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:29.167761    9103 logs.go:276] 0 containers: []
	W0923 03:38:29.167774    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:29.167843    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:29.178142    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:29.178153    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:29.178158    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:29.215020    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:29.215032    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:29.229664    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:29.229673    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:29.244827    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:29.244838    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:29.259574    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:29.259584    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:29.277166    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:29.277176    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:29.288825    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:29.288838    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:29.314057    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:29.314074    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:29.318383    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:29.318392    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:29.353511    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:29.353522    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:29.367706    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:29.367715    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:29.379280    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:29.379293    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:38:29.393966    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:29.393977    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:31.907585    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:36.909799    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:36.909984    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:36.924576    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:36.924675    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:36.935873    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:36.935953    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:36.946112    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:38:36.946198    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:36.956451    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:36.956535    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:36.966909    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:36.966981    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:36.977238    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:36.977311    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:36.987588    9103 logs.go:276] 0 containers: []
	W0923 03:38:36.987599    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:36.987662    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:36.997741    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:36.997755    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:36.997760    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:37.008783    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:37.008795    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:37.033668    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:37.033675    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:37.071781    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:37.071791    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:37.085690    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:37.085703    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:37.097898    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:37.097910    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:38:37.112518    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:37.112528    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:37.130687    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:37.130702    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:37.142101    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:37.142119    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:37.146529    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:37.146537    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:37.181633    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:37.181648    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:37.198373    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:37.198384    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:37.215317    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:37.215327    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:39.734772    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:44.737026    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:44.737502    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:44.767256    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:44.767377    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:44.783522    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:44.783615    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:44.797129    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:38:44.797207    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:44.808458    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:44.808539    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:44.818792    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:44.818866    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:44.829084    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:44.829156    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:44.839633    9103 logs.go:276] 0 containers: []
	W0923 03:38:44.839644    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:44.839707    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:44.850113    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:44.850128    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:44.850134    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:44.891286    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:44.891300    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:44.905713    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:44.905725    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:44.922272    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:44.922287    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:44.933955    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:44.933965    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:44.945531    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:44.945544    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:44.969006    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:44.969013    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:44.980205    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:44.980219    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:45.016918    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:45.016928    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:45.021574    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:45.021579    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:45.035760    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:45.035771    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:45.047402    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:45.047417    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:38:45.067621    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:45.067632    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:47.586674    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:52.588773    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:52.588941    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:52.602479    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:52.602556    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:52.612600    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:52.612686    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:52.623208    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:38:52.623281    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:52.633561    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:52.633647    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:52.644503    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:52.644595    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:52.655872    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:52.655956    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:52.665703    9103 logs.go:276] 0 containers: []
	W0923 03:38:52.665719    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:52.665798    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:52.676277    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:52.676294    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:52.676300    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:52.687793    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:52.687803    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:52.699073    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:52.699083    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:52.736355    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:52.736363    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:52.750792    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:52.750802    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:52.765060    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:52.765072    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:52.769659    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:52.769668    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:52.803274    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:52.803288    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:52.817757    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:52.817773    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:52.829474    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:52.829484    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:38:52.844384    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:52.844396    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:52.861876    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:52.861886    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:52.873552    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:38:52.873564    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:38:52.884734    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:38:52.884749    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:38:52.896029    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:52.896042    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:55.421973    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:00.424209    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:00.424438    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:00.445697    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:00.445822    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:00.460899    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:00.460990    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:00.473585    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:00.473674    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:00.485149    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:00.485237    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:00.496539    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:00.496624    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:00.506875    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:00.506956    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:00.516813    9103 logs.go:276] 0 containers: []
	W0923 03:39:00.516830    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:00.516898    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:00.527008    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:00.527027    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:00.527033    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:00.562647    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:00.562658    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:00.574284    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:00.574295    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:00.586127    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:00.586142    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:00.600228    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:00.600239    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:00.612609    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:00.612623    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:00.624071    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:00.624083    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:00.636068    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:00.636080    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:00.661721    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:00.661728    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:00.700207    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:00.700216    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:00.704943    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:00.704954    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:00.718980    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:00.718990    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:00.730688    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:00.730700    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:00.745919    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:00.745931    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:00.765872    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:00.765882    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:03.281247    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:08.281815    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:08.282063    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:08.304314    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:08.304431    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:08.322241    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:08.322333    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:08.337411    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:08.337485    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:08.348810    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:08.348891    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:08.359075    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:08.359148    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:08.369031    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:08.369108    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:08.379014    9103 logs.go:276] 0 containers: []
	W0923 03:39:08.379026    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:08.379092    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:08.405794    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:08.405812    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:08.405820    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:08.430715    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:08.430730    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:08.455853    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:08.455862    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:08.470552    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:08.470562    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:08.482264    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:08.482274    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:08.493787    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:08.493798    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:08.506545    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:08.506555    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:08.546547    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:08.546561    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:08.560784    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:08.560794    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:08.565349    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:08.565357    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:08.577065    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:08.577077    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:08.588927    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:08.588939    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:08.600430    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:08.600441    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:08.615502    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:08.615511    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:08.651182    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:08.651191    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:11.167720    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:16.170192    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:16.170468    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:16.197351    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:16.197479    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:16.212604    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:16.212702    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:16.224500    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:16.224590    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:16.235783    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:16.235860    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:16.246448    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:16.246544    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:16.258590    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:16.258679    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:16.272392    9103 logs.go:276] 0 containers: []
	W0923 03:39:16.272406    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:16.272476    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:16.282833    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:16.282851    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:16.282857    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:16.288036    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:16.288047    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:16.302332    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:16.302343    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:16.314295    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:16.314305    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:16.326348    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:16.326365    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:16.350359    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:16.350368    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:16.362467    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:16.362476    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:16.374087    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:16.374097    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:16.389494    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:16.389505    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:16.400825    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:16.400840    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:16.415675    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:16.415691    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:16.427293    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:16.427304    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:16.439652    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:16.439666    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:16.476190    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:16.476204    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:16.511488    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:16.511500    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:19.031930    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:24.034161    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
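
Each "Checking apiserver healthz" / "stopped:" pair above is a single probe of https://10.0.2.15:8443/healthz with a roughly five-second client timeout; the "context deadline exceeded" text is the probe's own timeout firing, not the apiserver answering unhealthy. A sketch of one such probe, assuming a self-signed apiserver serving certificate (hence InsecureSkipVerify); probeHealthz is an illustrative helper, not minikube's:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz performs one healthz check, the equivalent of a
    // single "Checking apiserver healthz" line above.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap before each "stopped:" line
            Transport: &http.Transport{
                // The guest apiserver's cert is not in the host trust store.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // a timeout surfaces as "context deadline exceeded"
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
            return
        }
        fmt.Println("apiserver healthy")
    }
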
	I0923 03:39:24.034364    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:24.055843    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:24.055964    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:24.078727    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:24.078820    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:24.089847    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:24.089930    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:24.100178    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:24.100263    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:24.113611    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:24.113694    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:24.124229    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:24.124300    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:24.138234    9103 logs.go:276] 0 containers: []
	W0923 03:39:24.138245    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:24.138319    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:24.148608    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:24.148623    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:24.148630    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:24.160398    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:24.160408    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:24.172241    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:24.172262    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:24.183830    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:24.183840    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:24.223457    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:24.223468    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:24.235832    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:24.235844    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:24.273662    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:24.273674    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:24.285673    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:24.285685    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:24.297565    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:24.297577    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:24.322775    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:24.322786    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:24.334591    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:24.334601    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:24.339452    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:24.339459    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:24.354146    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:24.354160    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:24.368488    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:24.368500    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:24.387407    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:24.387416    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
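
The kubelet and Docker entries in each cycle come from journalctl rather than from container logs, since those components run as systemd units inside the guest; note that docker and cri-docker are gathered as two units in one query. A Go sketch of that unit-log collection, assuming a systemd host and passwordless sudo (unitLogs is an illustrative helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // unitLogs mirrors `sudo journalctl -u <unit> -n 400`: the last 400
    // journal entries for one or more systemd units.
    func unitLogs(units ...string) (string, error) {
        args := []string{"journalctl", "-n", "400"}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        // kubelet is queried alone; the runtime is queried as two
        // related units in a single pass, as in the log above.
        for _, group := range [][]string{{"kubelet"}, {"docker", "cri-docker"}} {
            logs, err := unitLogs(group...)
            if err != nil {
                fmt.Println(group, "error:", err)
                continue
            }
            fmt.Println(len(logs), "bytes from", group)
        }
    }
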
	I0923 03:39:26.907268    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:31.909490    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:31.909673    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:31.927213    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:31.927317    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:31.940320    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:31.940408    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:31.957552    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:31.957640    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:31.967592    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:31.967678    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:31.978649    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:31.978729    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:31.989429    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:31.989509    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:31.999250    9103 logs.go:276] 0 containers: []
	W0923 03:39:31.999263    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:31.999329    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:32.009657    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:32.009677    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:32.009684    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:32.023455    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:32.023468    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:32.037516    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:32.037531    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:32.049534    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:32.049546    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:32.061558    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:32.061572    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:32.079972    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:32.079983    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:32.091638    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:32.091649    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:32.130349    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:32.130361    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:32.150850    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:32.150860    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:32.163257    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:32.163269    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:32.168197    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:32.168204    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:32.204717    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:32.204732    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:32.221894    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:32.221907    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:32.238174    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:32.238187    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:32.250215    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:32.250228    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
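
The dmesg step in each cycle filters the kernel ring buffer to warning-and-worse records before tailing it; the -P, -H, and -L=never flags disable the pager, human formatting, and color so the output pipes cleanly. A sketch that shells out to the same pipeline, assuming util-linux dmesg (which provides --level):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Equivalent of the log's:
        //   sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
        cmd := exec.Command("/bin/bash", "-c",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Println("dmesg failed:", err)
            return
        }
        fmt.Print(string(out))
    }
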
	I0923 03:39:34.777400    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:39.779697    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:39.779869    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:39.797737    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:39.797848    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:39.814148    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:39.814223    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:39.831129    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:39.831199    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:39.842053    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:39.842134    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:39.856371    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:39.856448    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:39.866551    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:39.866624    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:39.876926    9103 logs.go:276] 0 containers: []
	W0923 03:39:39.876941    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:39.877012    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:39.887206    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:39.887224    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:39.887230    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:39.926038    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:39.926049    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:39.940154    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:39.940170    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:39.951763    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:39.951774    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:39.963540    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:39.963551    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:39.975261    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:39.975271    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:40.010914    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:40.010925    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:40.025816    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:40.025831    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:40.037710    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:40.037720    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:40.056955    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:40.056970    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:40.068924    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:40.068933    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:40.093468    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:40.093476    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:40.097632    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:40.097639    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:40.109295    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:40.109308    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:40.121166    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:40.121180    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
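
The "container status" step is runtime-agnostic: the backtick expression in the command above prefers crictl when it is installed and falls back to docker ps otherwise. A sketch of the same preference order done natively, assuming either CLI may be absent:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl if it is on PATH, otherwise fall back to docker,
        // mirroring: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
        tool := "docker"
        if _, err := exec.LookPath("crictl"); err == nil {
            tool = "crictl"
        }
        out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
        if err != nil && tool == "crictl" {
            // crictl was present but failed (e.g. no CRI endpoint); try docker.
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("no container runtime answered:", err)
            return
        }
        fmt.Print(string(out))
    }
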
	I0923 03:39:42.640769    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:47.643329    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:47.643552    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:47.658823    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:47.658915    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:47.670173    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:47.670260    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:47.680881    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:47.680962    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:47.693033    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:47.693116    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:47.703925    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:47.704005    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:47.714505    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:47.714596    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:47.735830    9103 logs.go:276] 0 containers: []
	W0923 03:39:47.735842    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:47.735916    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:47.746370    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:47.746388    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:47.746394    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:47.760876    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:47.760887    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:47.801443    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:47.801459    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:47.806458    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:47.806464    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:47.828416    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:47.828427    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:47.850525    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:47.850538    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:47.862097    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:47.862108    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:47.880398    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:47.880409    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:47.892373    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:47.892383    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:47.904630    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:47.904643    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:47.916623    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:47.916635    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:47.928314    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:47.928327    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:47.948365    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:47.948374    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:47.971764    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:47.971771    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:48.031862    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:48.031898    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
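
The "describe nodes" step calls the kubectl binary that minikube staged under /var/lib/minikube/binaries/<version>, pointed at the in-guest kubeconfig, so it works even with no kubectl on the PATH. A sketch using the exact paths from the log, run with sudo because the kubeconfig under /var/lib/minikube is root-owned:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as the log's "describe nodes" step.
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out))
    }
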
	I0923 03:39:50.548531    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:55.550861    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:55.551072    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:55.575859    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:55.576002    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:55.592670    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:55.592766    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:55.605750    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:55.605843    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:55.617104    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:55.617186    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:55.627405    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:55.627489    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:55.642092    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:55.642174    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:55.654045    9103 logs.go:276] 0 containers: []
	W0923 03:39:55.654056    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:55.654124    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:55.664834    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:55.664850    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:55.664855    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:55.701425    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:55.701434    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:55.705796    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:55.705806    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:55.719490    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:55.719503    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:55.734471    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:55.734484    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:55.751871    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:55.751883    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:55.775463    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:55.775476    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:55.810717    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:55.810733    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:55.825567    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:55.825582    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:55.837264    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:55.837279    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:55.848518    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:55.848533    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:55.859959    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:55.859974    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:55.874237    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:55.874247    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:55.886460    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:55.886475    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:55.898639    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:55.898654    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:58.415547    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:03.417222    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:03.417391    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:03.432438    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:40:03.432538    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:03.444181    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:40:03.444267    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:03.454927    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:40:03.455006    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:03.465606    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:40:03.465680    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:03.476070    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:40:03.476138    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:03.486408    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:40:03.486476    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:03.500826    9103 logs.go:276] 0 containers: []
	W0923 03:40:03.500836    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:03.500899    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:03.511542    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:40:03.511560    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:40:03.511566    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:40:03.531312    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:40:03.531323    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:40:03.549717    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:40:03.549729    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:40:03.561871    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:40:03.561881    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:40:03.573050    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:40:03.573062    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:40:03.593277    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:40:03.593289    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:40:03.605194    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:40:03.605205    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:40:03.622455    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:40:03.622469    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:40:03.633796    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:40:03.633808    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:03.645798    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:03.645810    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:03.682283    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:03.682294    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:03.716990    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:40:03.717001    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:40:03.730664    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:03.730673    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:03.735165    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:40:03.735172    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:40:03.747264    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:03.747278    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:06.274123    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:11.274608    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:11.275019    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:11.307924    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:40:11.308078    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:11.328365    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:40:11.328495    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:11.342714    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:40:11.342805    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:11.354723    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:40:11.354796    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:11.365895    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:40:11.365973    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:11.376046    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:40:11.376122    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:11.386296    9103 logs.go:276] 0 containers: []
	W0923 03:40:11.386312    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:11.386372    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:11.396681    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:40:11.396699    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:40:11.396704    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:40:11.410475    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:40:11.410486    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:40:11.422261    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:40:11.422277    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:40:11.433847    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:40:11.433856    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:40:11.445875    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:40:11.445886    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:11.458483    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:40:11.458500    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:40:11.472709    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:11.472717    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:11.497025    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:11.497036    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:11.536493    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:40:11.536502    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:40:11.548744    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:40:11.548758    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:40:11.560288    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:40:11.560304    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:40:11.577912    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:11.577925    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:11.616191    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:11.616199    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:11.620408    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:40:11.620413    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:40:11.635996    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:40:11.636007    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:40:14.149603    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:19.151724    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:19.151838    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:19.163161    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:40:19.163246    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:19.174542    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:40:19.174624    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:19.186427    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:40:19.186519    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:19.208627    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:40:19.208713    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:19.220434    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:40:19.220511    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:19.231433    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:40:19.231518    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:19.243250    9103 logs.go:276] 0 containers: []
	W0923 03:40:19.243262    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:19.243331    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:19.257761    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:40:19.257781    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:19.257787    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:19.298047    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:40:19.298058    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:40:19.310908    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:40:19.310921    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:40:19.327706    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:40:19.327722    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:40:19.341084    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:19.341094    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:19.345866    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:40:19.345874    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:40:19.359944    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:40:19.359960    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:40:19.372623    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:40:19.372638    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:40:19.384845    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:40:19.384857    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:40:19.403010    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:40:19.403022    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:40:19.415841    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:19.415852    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:19.440973    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:19.440983    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:19.479612    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:40:19.479624    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:40:19.494046    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:40:19.494056    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:40:19.506295    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:40:19.506308    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:22.024018    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:27.026112    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:27.026257    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:27.039480    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:40:27.039563    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:27.050107    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:40:27.050188    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:27.060613    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:40:27.060688    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:27.071589    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:40:27.071671    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:27.082056    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:40:27.082132    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:27.092348    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:40:27.092432    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:27.102760    9103 logs.go:276] 0 containers: []
	W0923 03:40:27.102770    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:27.102831    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:27.113445    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:40:27.113464    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:40:27.113469    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:40:27.128122    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:40:27.128134    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:40:27.141781    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:40:27.141791    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:40:27.153375    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:27.153389    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:27.190711    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:27.190720    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:27.195706    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:40:27.195714    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:40:27.207322    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:40:27.207336    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:40:27.218926    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:27.218936    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:27.243310    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:40:27.243318    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:27.254784    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:27.254799    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:27.291186    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:40:27.291200    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:40:27.303380    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:40:27.303393    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:40:27.314833    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:40:27.314847    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:40:27.329840    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:40:27.329853    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:40:27.341732    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:40:27.341746    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:40:29.861465    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:34.863448    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:34.863573    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:34.878130    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:40:34.878214    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:34.888406    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:40:34.888489    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:34.899096    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:40:34.899181    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:34.911700    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:40:34.911768    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:34.925779    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:40:34.925856    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:34.936098    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:40:34.936179    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:34.950858    9103 logs.go:276] 0 containers: []
	W0923 03:40:34.950875    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:34.950943    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:34.961243    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:40:34.961262    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:34.961269    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:34.966389    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:34.966397    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:35.001306    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:40:35.001317    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:40:35.014212    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:40:35.014224    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:40:35.026079    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:35.026094    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:35.062625    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:40:35.062632    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:40:35.077151    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:40:35.077162    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:40:35.088530    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:40:35.088546    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:40:35.100656    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:35.100669    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:35.125422    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:40:35.125432    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:40:35.141304    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:40:35.141315    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:40:35.153418    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:40:35.153429    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:40:35.168086    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:40:35.168096    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:40:35.180419    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:40:35.180435    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:40:35.199026    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:40:35.199037    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:37.712875    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:42.714973    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:42.715246    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:42.731857    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:40:42.731957    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:42.744659    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:40:42.744741    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:42.755876    9103 logs.go:276] 4 containers: [1aef8dd622dc cfb21961ef92 0752b17b0c08 41cccde2068e]
	I0923 03:40:42.755961    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:42.766382    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:40:42.766455    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:42.776556    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:40:42.776637    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:42.786975    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:40:42.787053    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:42.796726    9103 logs.go:276] 0 containers: []
	W0923 03:40:42.796737    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:42.796803    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:42.807645    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:40:42.807666    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:40:42.807671    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:40:42.819740    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:42.819750    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:42.842496    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:40:42.842506    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:42.854007    9103 logs.go:123] Gathering logs for coredns [1aef8dd622dc] ...
	I0923 03:40:42.854017    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aef8dd622dc"
	I0923 03:40:42.865365    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:42.865378    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:42.870331    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:42.870337    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:42.905028    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:40:42.905040    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:40:42.919468    9103 logs.go:123] Gathering logs for coredns [cfb21961ef92] ...
	I0923 03:40:42.919481    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb21961ef92"
	I0923 03:40:42.930783    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:40:42.930797    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:40:42.942342    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:42.942357    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:42.980410    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:40:42.980418    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:40:42.992196    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:40:42.992210    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:40:43.010602    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:40:43.010617    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:40:43.022842    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:40:43.022852    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:40:43.038009    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:40:43.038022    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:40:45.563837    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:50.564884    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:50.570480    9103 out.go:201] 
	W0923 03:40:50.575473    9103 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0923 03:40:50.575485    9103 out.go:270] * 
	W0923 03:40:50.576315    9103 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:40:50.590400    9103 out.go:201] 

** /stderr **
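
What the stderr capture above records is minikube's API-server wait loop timing out: it repeatedly issues a GET against https://10.0.2.15:8443/healthz with a roughly 5-second client timeout, gathers component logs between attempts, and once the 6m0s node-start budget is spent it exits with GUEST_START. A minimal sketch of that polling pattern, assuming a self-signed apiserver certificate (illustrative only, not the actual api_server.go code):

// healthzwait.go - a minimal sketch of the apiserver health-wait loop seen
// above (api_server.go:253/269). Interval and helper names are assumptions,
// not minikube's actual implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the budget expires.
func waitForHealthz(url string, budget time.Duration) error {
	client := &http.Client{
		// matches the ~5s gap between each "Checking apiserver healthz"
		// line and its "stopped:" line in the log
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the apiserver cert is not trusted by the host, so skip verification
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // back off between probes
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}
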
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-515000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-23 03:40:50.680743 -0700 PDT m=+1305.516744834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-515000 -n running-upgrade-515000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-515000 -n running-upgrade-515000: exit status 2 (15.781627125s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-515000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-635000          | force-systemd-flag-635000 | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-958000              | force-systemd-env-958000  | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-958000           | force-systemd-env-958000  | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT | 23 Sep 24 03:31 PDT |
	| start   | -p docker-flags-506000                | docker-flags-506000       | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-635000             | force-systemd-flag-635000 | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-635000          | force-systemd-flag-635000 | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT | 23 Sep 24 03:31 PDT |
	| start   | -p cert-expiration-413000             | cert-expiration-413000    | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-506000 ssh               | docker-flags-506000       | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-506000 ssh               | docker-flags-506000       | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-506000                | docker-flags-506000       | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT | 23 Sep 24 03:31 PDT |
	| start   | -p cert-options-903000                | cert-options-903000       | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-903000 ssh               | cert-options-903000       | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-903000 -- sudo        | cert-options-903000       | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-903000                | cert-options-903000       | jenkins | v1.34.0 | 23 Sep 24 03:31 PDT | 23 Sep 24 03:31 PDT |
	| start   | -p running-upgrade-515000             | minikube                  | jenkins | v1.26.0 | 23 Sep 24 03:31 PDT | 23 Sep 24 03:32 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-515000             | running-upgrade-515000    | jenkins | v1.34.0 | 23 Sep 24 03:32 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-413000             | cert-expiration-413000    | jenkins | v1.34.0 | 23 Sep 24 03:34 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-413000             | cert-expiration-413000    | jenkins | v1.34.0 | 23 Sep 24 03:34 PDT | 23 Sep 24 03:34 PDT |
	| start   | -p kubernetes-upgrade-915000          | kubernetes-upgrade-915000 | jenkins | v1.34.0 | 23 Sep 24 03:34 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-915000          | kubernetes-upgrade-915000 | jenkins | v1.34.0 | 23 Sep 24 03:34 PDT | 23 Sep 24 03:34 PDT |
	| start   | -p kubernetes-upgrade-915000          | kubernetes-upgrade-915000 | jenkins | v1.34.0 | 23 Sep 24 03:34 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-915000          | kubernetes-upgrade-915000 | jenkins | v1.34.0 | 23 Sep 24 03:34 PDT | 23 Sep 24 03:34 PDT |
	| start   | -p stopped-upgrade-516000             | minikube                  | jenkins | v1.26.0 | 23 Sep 24 03:34 PDT | 23 Sep 24 03:35 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-516000 stop           | minikube                  | jenkins | v1.26.0 | 23 Sep 24 03:35 PDT | 23 Sep 24 03:35 PDT |
	| start   | -p stopped-upgrade-516000             | stopped-upgrade-516000    | jenkins | v1.34.0 | 23 Sep 24 03:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 03:35:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 03:35:43.015087    9267 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:35:43.015264    9267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:35:43.015268    9267 out.go:358] Setting ErrFile to fd 2...
	I0923 03:35:43.015271    9267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:35:43.015443    9267 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:35:43.016802    9267 out.go:352] Setting JSON to false
	I0923 03:35:43.036618    9267 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5714,"bootTime":1727082029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:35:43.036691    9267 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:35:43.041751    9267 out.go:177] * [stopped-upgrade-516000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:35:43.048736    9267 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:35:43.048796    9267 notify.go:220] Checking for updates...
	I0923 03:35:43.056578    9267 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:35:43.060762    9267 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:35:43.063713    9267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:35:43.066756    9267 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:35:43.069710    9267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:35:43.073997    9267 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:35:43.076664    9267 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 03:35:43.079735    9267 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:35:43.082706    9267 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:35:43.089731    9267 start.go:297] selected driver: qemu2
	I0923 03:35:43.089736    9267 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 03:35:43.089784    9267 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:35:43.092314    9267 cni.go:84] Creating CNI manager for ""
	I0923 03:35:43.092343    9267 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:35:43.092379    9267 start.go:340] cluster config:
	{Name:stopped-upgrade-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 03:35:43.092441    9267 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:35:43.100704    9267 out.go:177] * Starting "stopped-upgrade-516000" primary control-plane node in "stopped-upgrade-516000" cluster
	I0923 03:35:43.104690    9267 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 03:35:43.104704    9267 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0923 03:35:43.104712    9267 cache.go:56] Caching tarball of preloaded images
	I0923 03:35:43.104763    9267 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:35:43.104768    9267 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0923 03:35:43.104815    9267 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/config.json ...
	I0923 03:35:43.105233    9267 start.go:360] acquireMachinesLock for stopped-upgrade-516000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:35:43.105268    9267 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "stopped-upgrade-516000"
	I0923 03:35:43.105276    9267 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:35:43.105281    9267 fix.go:54] fixHost starting: 
	I0923 03:35:43.105383    9267 fix.go:112] recreateIfNeeded on stopped-upgrade-516000: state=Stopped err=<nil>
	W0923 03:35:43.105391    9267 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:35:43.113729    9267 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-516000" ...
	I0923 03:35:39.415597    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:35:39.415718    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:35:39.426820    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:35:39.426907    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:35:39.436926    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:35:39.437008    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:35:39.447664    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:35:39.447744    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:35:39.458140    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:35:39.458229    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:35:39.468547    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:35:39.468624    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:35:39.479109    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:35:39.479193    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:35:39.489615    9103 logs.go:276] 0 containers: []
	W0923 03:35:39.489627    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:35:39.489696    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:35:39.500292    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:35:39.500309    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:35:39.500314    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:35:39.511219    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:35:39.511228    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:35:39.522689    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:35:39.522704    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:35:39.527630    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:35:39.527637    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:35:39.539901    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:35:39.539911    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:35:39.578222    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:35:39.578234    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:35:39.594083    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:35:39.594094    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:35:39.607472    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:35:39.607482    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:35:39.631777    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:35:39.631789    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:35:39.643755    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:35:39.643768    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:35:39.667757    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:35:39.667767    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:35:39.706283    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:35:39.706291    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:35:39.720266    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:35:39.720276    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:35:39.742599    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:35:39.742610    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:35:39.756963    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:35:39.756975    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:35:39.772816    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:35:39.772824    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:35:42.288668    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:35:43.117740    9267 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:35:43.117808    9267 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51483-:22,hostfwd=tcp::51484-:2376,hostname=stopped-upgrade-516000 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/disk.qcow2
	I0923 03:35:43.164967    9267 main.go:141] libmachine: STDOUT: 
	I0923 03:35:43.164999    9267 main.go:141] libmachine: STDERR: 
	I0923 03:35:43.165009    9267 main.go:141] libmachine: Waiting for VM to start (ssh -p 51483 docker@127.0.0.1)...
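
Before the interleaved 9103 output resumes below, the 9267 run has just booted the VM; the libmachine line above is the full qemu-system-aarch64 invocation. The sketch below rebuilds that command with each flag annotated, reusing the paths verbatim from this log; the wrapper itself is illustrative and not minikube's qemu2 driver code:

// qemustart.go - rebuilds the libmachine qemu invocation logged above,
// flag by flag. Paths are taken from this log; the program is a sketch.
package main

import "os/exec"

func main() {
	base := "/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000"
	cmd := exec.Command("qemu-system-aarch64",
		"-M", "virt,highmem=off", // aarch64 "virt" machine type
		"-cpu", "host", // pass the host CPU through
		// EDK2 UEFI firmware mapped read-only as pflash
		"-drive", "file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash",
		"-display", "none",
		"-accel", "hvf", // macOS Hypervisor.framework acceleration ("Using hvf" above)
		"-m", "2200", "-smp", "2", // 2200 MiB RAM, 2 vCPUs (--memory=2200)
		"-boot", "d", // boot from the boot2docker ISO
		"-cdrom", base+"/boot2docker.iso",
		"-qmp", "unix:"+base+"/monitor,server,nowait", // QMP control socket
		"-pidfile", base+"/qemu.pid",
		// user-mode NIC: host port 51483 forwards to guest SSH (22),
		// 51484 to the Docker daemon (2376) - the ports the provisioner dials next
		"-nic", "user,model=virtio,hostfwd=tcp::51483-:22,hostfwd=tcp::51484-:2376,hostname=stopped-upgrade-516000",
		"-daemonize",
		base+"/disk.qcow2",
	)
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
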
	I0923 03:35:47.290944    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:35:47.291509    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:35:47.331069    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:35:47.331230    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:35:47.352554    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:35:47.352732    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:35:47.369055    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:35:47.369143    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:35:47.381965    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:35:47.382050    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:35:47.392762    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:35:47.392845    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:35:47.403759    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:35:47.403830    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:35:47.418714    9103 logs.go:276] 0 containers: []
	W0923 03:35:47.418727    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:35:47.418804    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:35:47.432252    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:35:47.432270    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:35:47.432276    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:35:47.468347    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:35:47.468357    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:35:47.488966    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:35:47.488975    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:35:47.529095    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:35:47.529104    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:35:47.540906    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:35:47.540918    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:35:47.552301    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:35:47.552314    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:35:47.575860    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:35:47.575871    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:35:47.587582    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:35:47.587593    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:35:47.612256    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:35:47.612262    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:35:47.627041    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:35:47.627054    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:35:47.642919    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:35:47.642930    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:35:47.661114    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:35:47.661126    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:35:47.678848    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:35:47.678859    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:35:47.683717    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:35:47.683725    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:35:47.694532    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:35:47.694543    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:35:47.712299    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:35:47.712311    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:35:50.226089    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:35:55.228180    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:35:55.228330    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:35:55.239505    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:35:55.239590    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:35:55.250031    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:35:55.250118    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:35:55.260334    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:35:55.260419    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:35:55.271369    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:35:55.271455    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:35:55.281699    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:35:55.281781    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:35:55.292010    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:35:55.292094    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:35:55.302543    9103 logs.go:276] 0 containers: []
	W0923 03:35:55.302558    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:35:55.302624    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:35:55.313615    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:35:55.313640    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:35:55.313644    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:35:55.353612    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:35:55.353629    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:35:55.365671    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:35:55.365683    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:35:55.383518    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:35:55.383529    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:35:55.388121    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:35:55.388128    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:35:55.402346    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:35:55.402356    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:35:55.419205    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:35:55.419220    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:35:55.434421    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:35:55.434432    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:35:55.452055    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:35:55.452069    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:35:55.494477    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:35:55.494489    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:35:55.508981    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:35:55.508993    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:35:55.522138    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:35:55.522153    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:35:55.533949    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:35:55.533964    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:35:55.559302    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:35:55.559312    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:35:55.581224    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:35:55.581238    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:35:55.593318    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:35:55.593329    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:35:58.107398    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:03.109658    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:03.109947    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:36:03.139733    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:36:03.139897    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:36:03.157584    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:36:03.157691    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:36:03.170671    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:36:03.170758    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:36:03.182073    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:36:03.182162    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:36:03.192019    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:36:03.192096    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:36:03.202682    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:36:03.202761    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:36:03.212708    9103 logs.go:276] 0 containers: []
	W0923 03:36:03.212718    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:36:03.212794    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:36:03.225856    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:36:03.225877    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:36:03.225890    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:36:03.251258    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:36:03.251266    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:36:03.266282    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:36:03.266293    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:36:03.278192    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:36:03.278202    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:36:03.292171    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:36:03.292183    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:36:03.555915    9267 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/config.json ...
	I0923 03:36:03.556146    9267 machine.go:93] provisionDockerMachine start ...
	I0923 03:36:03.556196    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:03.556327    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:03.556331    9267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 03:36:03.620919    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 03:36:03.620935    9267 buildroot.go:166] provisioning hostname "stopped-upgrade-516000"
	I0923 03:36:03.621000    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:03.621134    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:03.621140    9267 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-516000 && echo "stopped-upgrade-516000" | sudo tee /etc/hostname
	I0923 03:36:03.690425    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-516000
	
	I0923 03:36:03.690492    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:03.690598    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:03.690609    9267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-516000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-516000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-516000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 03:36:03.755578    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 03:36:03.755595    9267 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19689-6600/.minikube CaCertPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19689-6600/.minikube}
	I0923 03:36:03.755603    9267 buildroot.go:174] setting up certificates
	I0923 03:36:03.755608    9267 provision.go:84] configureAuth start
	I0923 03:36:03.755613    9267 provision.go:143] copyHostCerts
	I0923 03:36:03.755688    9267 exec_runner.go:144] found /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.pem, removing ...
	I0923 03:36:03.755694    9267 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.pem
	I0923 03:36:03.755805    9267 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.pem (1078 bytes)
	I0923 03:36:03.755989    9267 exec_runner.go:144] found /Users/jenkins/minikube-integration/19689-6600/.minikube/cert.pem, removing ...
	I0923 03:36:03.755993    9267 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19689-6600/.minikube/cert.pem
	I0923 03:36:03.756056    9267 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19689-6600/.minikube/cert.pem (1123 bytes)
	I0923 03:36:03.756167    9267 exec_runner.go:144] found /Users/jenkins/minikube-integration/19689-6600/.minikube/key.pem, removing ...
	I0923 03:36:03.756171    9267 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19689-6600/.minikube/key.pem
	I0923 03:36:03.756212    9267 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19689-6600/.minikube/key.pem (1675 bytes)
	I0923 03:36:03.756307    9267 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-516000 san=[127.0.0.1 localhost minikube stopped-upgrade-516000]
	I0923 03:36:03.862839    9267 provision.go:177] copyRemoteCerts
	I0923 03:36:03.862876    9267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 03:36:03.862889    9267 sshutil.go:53] new ssh client: &{IP:localhost Port:51483 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/id_rsa Username:docker}
	I0923 03:36:03.897920    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 03:36:03.904435    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0923 03:36:03.911684    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 03:36:03.918843    9267 provision.go:87] duration metric: took 163.228583ms to configureAuth
	I0923 03:36:03.918852    9267 buildroot.go:189] setting minikube options for container-runtime
	I0923 03:36:03.918965    9267 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:36:03.919004    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:03.919101    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:03.919106    9267 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 03:36:03.981702    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 03:36:03.981710    9267 buildroot.go:70] root file system type: tmpfs
	I0923 03:36:03.981769    9267 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 03:36:03.981821    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:03.981926    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:03.981958    9267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 03:36:04.050018    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 03:36:04.050083    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:04.050203    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:04.050215    9267 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 03:36:04.392146    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 03:36:04.392160    9267 machine.go:96] duration metric: took 836.027167ms to provisionDockerMachine
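The unit refresh just above is deliberately idempotent: the new unit is written to docker.service.new, and only when `diff -u` exits non-zero is it moved into place and the daemon reloaded. Condensed from the exact command in the log (no new flags introduced), re-wrapped for readability:

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }

`diff` also fails when the old unit is absent, which is why this first run logs "can't stat '/lib/systemd/system/docker.service'" and then creates the multi-user.target.wants symlink via `enable`.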
	I0923 03:36:04.392168    9267 start.go:293] postStartSetup for "stopped-upgrade-516000" (driver="qemu2")
	I0923 03:36:04.392175    9267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 03:36:04.392246    9267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 03:36:04.392255    9267 sshutil.go:53] new ssh client: &{IP:localhost Port:51483 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/id_rsa Username:docker}
	I0923 03:36:04.428265    9267 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 03:36:04.429603    9267 info.go:137] Remote host: Buildroot 2021.02.12
	I0923 03:36:04.429610    9267 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19689-6600/.minikube/addons for local assets ...
	I0923 03:36:04.429682    9267 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19689-6600/.minikube/files for local assets ...
	I0923 03:36:04.429776    9267 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19689-6600/.minikube/files/etc/ssl/certs/71212.pem -> 71212.pem in /etc/ssl/certs
	I0923 03:36:04.429877    9267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 03:36:04.432573    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/files/etc/ssl/certs/71212.pem --> /etc/ssl/certs/71212.pem (1708 bytes)
	I0923 03:36:04.439899    9267 start.go:296] duration metric: took 47.726583ms for postStartSetup
	I0923 03:36:04.439915    9267 fix.go:56] duration metric: took 21.335105833s for fixHost
	I0923 03:36:04.439955    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:04.440060    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:04.440066    9267 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 03:36:04.505113    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727087764.535849879
	
	I0923 03:36:04.505121    9267 fix.go:216] guest clock: 1727087764.535849879
	I0923 03:36:04.505129    9267 fix.go:229] Guest: 2024-09-23 03:36:04.535849879 -0700 PDT Remote: 2024-09-23 03:36:04.439917 -0700 PDT m=+21.456803043 (delta=95.932879ms)
	I0923 03:36:04.505143    9267 fix.go:200] guest clock delta is within tolerance: 95.932879ms
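The clock check runs `date +%s.%N` on the guest over SSH and compares the result against the host clock; here the delta was ~96ms, inside tolerance, so the guest clock is left untouched. A minimal sketch of such a comparison (the 1-second threshold and the exact `ssh` invocation are illustrative, not taken from the log):

    guest=$(ssh -p 51483 docker@localhost 'date +%s.%N')   # hypothetical invocation
    host=$(date +%s.%N)
    # exit 0 when |guest - host| is under the (illustrative) 1s threshold
    awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; exit !(d < 1.0) }' \
      && echo "within tolerance" || echo "clock needs resetting"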
	I0923 03:36:04.505147    9267 start.go:83] releasing machines lock for "stopped-upgrade-516000", held for 21.400347334s
	I0923 03:36:04.505212    9267 ssh_runner.go:195] Run: cat /version.json
	I0923 03:36:04.505220    9267 sshutil.go:53] new ssh client: &{IP:localhost Port:51483 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/id_rsa Username:docker}
	I0923 03:36:04.505243    9267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 03:36:04.505267    9267 sshutil.go:53] new ssh client: &{IP:localhost Port:51483 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/id_rsa Username:docker}
	W0923 03:36:04.505808    9267 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51483: connect: connection refused
	I0923 03:36:04.505834    9267 retry.go:31] will retry after 231.011069ms: dial tcp [::1]:51483: connect: connection refused
	W0923 03:36:04.537028    9267 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0923 03:36:04.537092    9267 ssh_runner.go:195] Run: systemctl --version
	I0923 03:36:04.538952    9267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 03:36:04.540578    9267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 03:36:04.540614    9267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0923 03:36:04.543854    9267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0923 03:36:04.548731    9267 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
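The two `find ... -exec sed` passes above normalize whatever bridge/podman CNI configs exist under /etc/cni/net.d: lines whose dst/subnet values contain a colon (IPv6) are deleted, and the remaining subnet/gateway values are pinned to the pod network. The podman rewrite, condensed onto the one file it matched here:

    sudo sed -i -r \
      -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
      -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
      /etc/cni/net.d/87-podman-bridge.conflist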
	I0923 03:36:04.548743    9267 start.go:495] detecting cgroup driver to use...
	I0923 03:36:04.548834    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 03:36:04.556144    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0923 03:36:04.559591    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 03:36:04.563059    9267 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 03:36:04.563094    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 03:36:04.566080    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 03:36:04.569023    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 03:36:04.572291    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 03:36:04.575834    9267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 03:36:04.579339    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 03:36:04.582337    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 03:36:04.585142    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 03:36:04.588405    9267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 03:36:04.591471    9267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 03:36:04.594192    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:04.658974    9267 ssh_runner.go:195] Run: sudo systemctl restart containerd
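The sed series before this restart pins containerd to the cgroupfs driver and the runc v2 shim, matching the driver Docker will be configured with below. The two decisive edits, verbatim from the log:

    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd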
	I0923 03:36:04.664682    9267 start.go:495] detecting cgroup driver to use...
	I0923 03:36:04.664764    9267 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 03:36:04.670777    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 03:36:04.675662    9267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 03:36:04.683372    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 03:36:04.688277    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 03:36:04.693811    9267 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 03:36:04.731260    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 03:36:04.736172    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 03:36:04.743087    9267 ssh_runner.go:195] Run: which cri-dockerd
	I0923 03:36:04.744554    9267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 03:36:04.747649    9267 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0923 03:36:04.752706    9267 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 03:36:04.813259    9267 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 03:36:04.872767    9267 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 03:36:04.872820    9267 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 03:36:04.877989    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:04.936576    9267 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 03:36:06.073228    9267 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.136659708s)
	I0923 03:36:06.073293    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 03:36:06.078243    9267 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0923 03:36:06.083807    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 03:36:06.089473    9267 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 03:36:06.153052    9267 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 03:36:06.214708    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:06.274457    9267 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 03:36:06.280208    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 03:36:06.284982    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:06.351053    9267 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 03:36:06.388650    9267 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 03:36:06.388750    9267 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 03:36:06.391421    9267 start.go:563] Will wait 60s for crictl version
	I0923 03:36:06.391478    9267 ssh_runner.go:195] Run: which crictl
	I0923 03:36:06.392880    9267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 03:36:06.407467    9267 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0923 03:36:06.407553    9267 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 03:36:06.423584    9267 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 03:36:06.444286    9267 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0923 03:36:06.444420    9267 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0923 03:36:06.445729    9267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
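The hosts rewrite above is the usual strip-then-append pattern, run under `/bin/bash -c` because of the ANSI-C `$'\t'` quoting: drop any stale mapping, append the current one, and copy the temp file over /etc/hosts in one step.

    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts

Writing to a temp file first keeps /etc/hosts intact if the pipeline fails midway.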
	I0923 03:36:06.449159    9267 kubeadm.go:883] updating cluster {Name:stopped-upgrade-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0923 03:36:06.449206    9267 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 03:36:06.449253    9267 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 03:36:06.459663    9267 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 03:36:06.459681    9267 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 03:36:06.459729    9267 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 03:36:06.463219    9267 ssh_runner.go:195] Run: which lz4
	I0923 03:36:06.464492    9267 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 03:36:06.465900    9267 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 03:36:06.465911    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0923 03:36:07.380692    9267 docker.go:649] duration metric: took 916.262291ms to copy over tarball
	I0923 03:36:07.380756    9267 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 03:36:03.308974    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:36:03.308984    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:36:03.320325    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:36:03.320339    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:36:03.334083    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:36:03.334096    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:36:03.347685    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:36:03.347699    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:36:03.383611    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:36:03.383627    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:36:03.421701    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:36:03.421713    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:36:03.426112    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:36:03.426117    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:36:03.437178    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:36:03.437190    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:36:03.448830    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:36:03.448840    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:36:03.460246    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:36:03.460254    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:36:03.479853    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:36:03.479871    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:36:06.005820    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:08.528354    9267 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.147609459s)
	I0923 03:36:08.528367    9267 ssh_runner.go:146] rm: /preloaded.tar.lz4
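The preload is a ~360 MB lz4 tarball of prebuilt /var/lib/docker content; it is scp'd into the guest, unpacked in place, and deleted. The extraction, verbatim:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4

`-I lz4` filters the archive through the lz4 binary, and the `--xattrs*` flags preserve security.capability extended attributes carried by files inside the image layers.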
	I0923 03:36:08.544219    9267 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 03:36:08.547625    9267 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0923 03:36:08.552659    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:08.615756    9267 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 03:36:10.301822    9267 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.686086459s)
	I0923 03:36:10.301949    9267 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 03:36:10.314043    9267 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 03:36:10.314053    9267 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 03:36:10.314059    9267 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0923 03:36:10.317600    9267 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:36:10.319367    9267 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 03:36:10.321245    9267 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:36:10.321402    9267 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:36:10.323733    9267 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 03:36:10.323843    9267 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0923 03:36:10.325266    9267 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:36:10.325395    9267 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0923 03:36:10.326577    9267 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0923 03:36:10.326651    9267 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:36:10.327746    9267 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0923 03:36:10.327940    9267 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:36:10.329231    9267 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:36:10.329328    9267 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:36:10.329887    9267 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:36:10.330708    9267 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:36:10.769115    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0923 03:36:10.775902    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0923 03:36:10.783588    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0923 03:36:10.789248    9267 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0923 03:36:10.789276    9267 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0923 03:36:10.789344    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0923 03:36:10.791052    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:36:10.805506    9267 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0923 03:36:10.805529    9267 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	W0923 03:36:10.805510    9267 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0923 03:36:10.805593    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0923 03:36:10.805676    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:36:10.808636    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:36:10.811249    9267 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0923 03:36:10.811267    9267 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0923 03:36:10.811321    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0923 03:36:10.826841    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:36:10.828799    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0923 03:36:10.828821    9267 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0923 03:36:10.828836    9267 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:36:10.828877    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:36:10.828913    9267 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0923 03:36:10.838877    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0923 03:36:10.847428    9267 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0923 03:36:10.847450    9267 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:36:10.847470    9267 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0923 03:36:10.847481    9267 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:36:10.847515    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0923 03:36:10.847521    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:36:10.847617    9267 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0923 03:36:10.847680    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:36:10.851810    9267 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0923 03:36:10.851834    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0923 03:36:10.851915    9267 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0923 03:36:10.851931    9267 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:36:10.851974    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:36:10.859174    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0923 03:36:10.874312    9267 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0923 03:36:10.874337    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0923 03:36:10.874386    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0923 03:36:10.874491    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0923 03:36:10.874604    9267 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0923 03:36:10.884245    9267 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0923 03:36:10.884258    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0923 03:36:10.887816    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0923 03:36:10.887864    9267 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0923 03:36:10.887877    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0923 03:36:10.947096    9267 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0923 03:36:10.989822    9267 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0923 03:36:10.989841    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0923 03:36:11.084418    9267 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0923 03:36:11.179047    9267 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0923 03:36:11.179063    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0923 03:36:11.211092    9267 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0923 03:36:11.211214    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:36:11.338404    9267 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0923 03:36:11.338495    9267 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0923 03:36:11.338518    9267 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:36:11.338595    9267 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:36:11.353210    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0923 03:36:11.353343    9267 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0923 03:36:11.354900    9267 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0923 03:36:11.354915    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0923 03:36:11.388323    9267 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0923 03:36:11.388335    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0923 03:36:11.627056    9267 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0923 03:36:11.627097    9267 cache_images.go:92] duration metric: took 1.313059292s to LoadCachedImages
	W0923 03:36:11.627142    9267 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
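Each image missing from the daemon is staged under /var/lib/minikube/images and piped in with `docker load`, exactly as run above:

    sudo cat /var/lib/minikube/images/pause_3.7 | docker load

The closing warning is the real finding here: the preload only carried k8s.gcr.io tags, and no cached tarball for registry.k8s.io/kube-proxy_v1.24.1 existed on the host, so LoadCachedImages surfaces the stat error and the start continues without that image.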
	I0923 03:36:11.627150    9267 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0923 03:36:11.627202    9267 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-516000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
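The kubelet unit uses the same ExecStart-clearing convention as the docker unit earlier: an empty `ExecStart=` resets the command inherited from the base unit before the real one is declared. Condensed from the unit just printed:

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-516000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15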
	I0923 03:36:11.627282    9267 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 03:36:11.642080    9267 cni.go:84] Creating CNI manager for ""
	I0923 03:36:11.642093    9267 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:36:11.642099    9267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 03:36:11.642108    9267 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-516000 NodeName:stopped-upgrade-516000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 03:36:11.642173    9267 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-516000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
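The generated file holds four API documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. The value that has to agree across layers is the cgroup driver; `cgroupDriver: cgroupfs` here matches what the daemon reported when queried just above with:

    docker info --format '{{.CgroupDriver}}'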
	
	I0923 03:36:11.642236    9267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0923 03:36:11.645082    9267 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 03:36:11.645118    9267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 03:36:11.648137    9267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0923 03:36:11.653286    9267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 03:36:11.658077    9267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0923 03:36:11.663341    9267 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0923 03:36:11.664412    9267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 03:36:11.667854    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:11.730438    9267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 03:36:11.737882    9267 certs.go:68] Setting up /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000 for IP: 10.0.2.15
	I0923 03:36:11.737895    9267 certs.go:194] generating shared ca certs ...
	I0923 03:36:11.737903    9267 certs.go:226] acquiring lock for ca certs: {Name:mk939083d37f22e3f0ca1f4aad8fa886b4374915 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:36:11.738065    9267 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.key
	I0923 03:36:11.738112    9267 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/proxy-client-ca.key
	I0923 03:36:11.738117    9267 certs.go:256] generating profile certs ...
	I0923 03:36:11.738177    9267 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/client.key
	I0923 03:36:11.738193    9267 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.key.cd07b11c
	I0923 03:36:11.738204    9267 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.crt.cd07b11c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0923 03:36:11.812636    9267 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.crt.cd07b11c ...
	I0923 03:36:11.812648    9267 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.crt.cd07b11c: {Name:mk37feb399682a06992ffd6d3e9a9124a477901a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:36:11.812947    9267 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.key.cd07b11c ...
	I0923 03:36:11.812952    9267 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.key.cd07b11c: {Name:mk39f6efef4910fd0322c7c95819c2a4737e57e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:36:11.813089    9267 certs.go:381] copying /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.crt.cd07b11c -> /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.crt
	I0923 03:36:11.813220    9267 certs.go:385] copying /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.key.cd07b11c -> /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.key
	I0923 03:36:11.813353    9267 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/proxy-client.key
	I0923 03:36:11.813499    9267 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/7121.pem (1338 bytes)
	W0923 03:36:11.813522    9267 certs.go:480] ignoring /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/7121_empty.pem, impossibly tiny 0 bytes
	I0923 03:36:11.813528    9267 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 03:36:11.813552    9267 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem (1078 bytes)
	I0923 03:36:11.813570    9267 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem (1123 bytes)
	I0923 03:36:11.813602    9267 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/key.pem (1675 bytes)
	I0923 03:36:11.813640    9267 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/files/etc/ssl/certs/71212.pem (1708 bytes)
	I0923 03:36:11.813971    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 03:36:11.820847    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 03:36:11.827868    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 03:36:11.834803    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 03:36:11.841636    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 03:36:11.848257    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 03:36:11.855492    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 03:36:11.862271    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 03:36:11.868889    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 03:36:11.876166    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/7121.pem --> /usr/share/ca-certificates/7121.pem (1338 bytes)
	I0923 03:36:11.883189    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/files/etc/ssl/certs/71212.pem --> /usr/share/ca-certificates/71212.pem (1708 bytes)
	I0923 03:36:11.889692    9267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 03:36:11.894534    9267 ssh_runner.go:195] Run: openssl version
	I0923 03:36:11.896322    9267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 03:36:11.899695    9267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 03:36:11.901167    9267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0923 03:36:11.901190    9267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 03:36:11.902946    9267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 03:36:11.905913    9267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7121.pem && ln -fs /usr/share/ca-certificates/7121.pem /etc/ssl/certs/7121.pem"
	I0923 03:36:11.908756    9267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7121.pem
	I0923 03:36:11.910147    9267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:19 /usr/share/ca-certificates/7121.pem
	I0923 03:36:11.910174    9267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7121.pem
	I0923 03:36:11.911967    9267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7121.pem /etc/ssl/certs/51391683.0"
	I0923 03:36:11.915470    9267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71212.pem && ln -fs /usr/share/ca-certificates/71212.pem /etc/ssl/certs/71212.pem"
	I0923 03:36:11.918502    9267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71212.pem
	I0923 03:36:11.919915    9267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:19 /usr/share/ca-certificates/71212.pem
	I0923 03:36:11.919937    9267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71212.pem
	I0923 03:36:11.921721    9267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71212.pem /etc/ssl/certs/3ec20f2e.0"
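OpenSSL resolves CAs in /etc/ssl/certs through subject-hash symlinks, which is what each `openssl x509 -hash` / `ln -fs` pair above maintains. A minimal sketch for one cert (the variable name is mine; the log hard-codes the hashes):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"

The `.0` suffix disambiguates hash collisions; b5213941.0 above is minikubeCA's hash in this run.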
	I0923 03:36:11.924753    9267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 03:36:11.926284    9267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 03:36:11.928334    9267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 03:36:11.930132    9267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 03:36:11.932057    9267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 03:36:11.933885    9267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 03:36:11.935541    9267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
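Each `-checkend 86400` probe exits non-zero if the certificate expires within the next 86400 seconds (24 h), presumably so certs nearing expiry can be regenerated before reuse. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expiring soon"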
	I0923 03:36:11.937318    9267 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 03:36:11.937394    9267 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 03:36:11.947271    9267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 03:36:11.950182    9267 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 03:36:11.950195    9267 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 03:36:11.950223    9267 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 03:36:11.952953    9267 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 03:36:11.953243    9267 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-516000" does not appear in /Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:36:11.953350    9267 kubeconfig.go:62] /Users/jenkins/minikube-integration/19689-6600/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-516000" cluster setting kubeconfig missing "stopped-upgrade-516000" context setting]
	I0923 03:36:11.953550    9267 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/kubeconfig: {Name:mkaea904cf5cdb46cc70169c92ea1151561be4fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:36:11.953990    9267 kapi.go:59] client config for stopped-upgrade-516000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/client.key", CAFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10675a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 03:36:11.954316    9267 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 03:36:11.956945    9267 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-516000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0923 03:36:11.956949    9267 kubeadm.go:1160] stopping kube-system containers ...
	I0923 03:36:11.956995    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 03:36:11.967608    9267 docker.go:483] Stopping containers: [66fdd05327e1 b7f7027cb0f6 560a63128e94 c1860da13243 d3552f071944 3f6ad30554d6 e970dd8e9394 16c4caebd050]
	I0923 03:36:11.967690    9267 ssh_runner.go:195] Run: docker stop 66fdd05327e1 b7f7027cb0f6 560a63128e94 c1860da13243 d3552f071944 3f6ad30554d6 e970dd8e9394 16c4caebd050
	I0923 03:36:11.978321    9267 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 03:36:11.983841    9267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 03:36:11.986950    9267 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 03:36:11.986955    9267 kubeadm.go:157] found existing configuration files:
	
	I0923 03:36:11.986979    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/admin.conf
	I0923 03:36:11.989455    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 03:36:11.989484    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 03:36:11.992339    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/kubelet.conf
	I0923 03:36:11.995409    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 03:36:11.995434    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 03:36:11.998193    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/controller-manager.conf
	I0923 03:36:12.000674    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 03:36:12.000694    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 03:36:12.003660    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/scheduler.conf
	I0923 03:36:12.006542    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 03:36:12.006566    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 03:36:12.008992    9267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 03:36:12.012125    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:36:12.034896    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:36:12.343736    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:36:12.458246    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:36:12.486858    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:36:12.512939    9267 api_server.go:52] waiting for apiserver process to appear ...
	I0923 03:36:12.513019    9267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 03:36:11.006670    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:11.006771    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:36:11.019288    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:36:11.019377    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:36:11.031249    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:36:11.031337    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:36:11.045889    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:36:11.045976    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:36:11.058059    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:36:11.058151    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:36:11.070894    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:36:11.070986    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:36:11.084352    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:36:11.084446    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:36:11.096148    9103 logs.go:276] 0 containers: []
	W0923 03:36:11.096161    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:36:11.096238    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:36:11.107890    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:36:11.107908    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:36:11.107914    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:36:11.123251    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:36:11.123263    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:36:11.135669    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:36:11.135682    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:36:11.150710    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:36:11.150722    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:36:11.170234    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:36:11.170248    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:36:11.183309    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:36:11.183322    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:36:11.187905    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:36:11.187917    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:36:11.213549    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:36:11.213564    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:36:11.232838    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:36:11.232849    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:36:11.255298    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:36:11.255316    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:36:11.299035    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:36:11.299051    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:36:11.316526    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:36:11.316540    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:36:11.330252    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:36:11.330263    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:36:11.369469    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:36:11.369483    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:36:11.388796    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:36:11.388807    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:36:11.401877    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:36:11.401889    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:36:13.014781    9267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 03:36:13.515048    9267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 03:36:13.519491    9267 api_server.go:72] duration metric: took 1.00657525s to wait for apiserver process to appear ...
	I0923 03:36:13.519501    9267 api_server.go:88] waiting for apiserver healthz status ...
	I0923 03:36:13.519511    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:13.930999    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:18.521485    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:18.521526    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:18.933147    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:18.933365    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:36:18.948055    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:36:18.948170    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:36:18.962306    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:36:18.962400    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:36:18.974145    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:36:18.974229    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:36:18.985471    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:36:18.985556    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:36:19.004122    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:36:19.004198    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:36:19.015350    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:36:19.015437    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:36:19.031778    9103 logs.go:276] 0 containers: []
	W0923 03:36:19.031791    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:36:19.031866    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:36:19.043237    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:36:19.043259    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:36:19.043266    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:36:19.055616    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:36:19.055628    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:36:19.076519    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:36:19.076530    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:36:19.094931    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:36:19.094943    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:36:19.110917    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:36:19.110929    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:36:19.123113    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:36:19.123124    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:36:19.165612    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:36:19.165627    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:36:19.203852    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:36:19.203864    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:36:19.208456    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:36:19.208468    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:36:19.225481    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:36:19.225498    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:36:19.239160    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:36:19.239173    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:36:19.257625    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:36:19.257643    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:36:19.270190    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:36:19.270201    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:36:19.296018    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:36:19.296028    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:36:19.307882    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:36:19.307895    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:36:19.322225    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:36:19.322236    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:36:21.840564    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:23.521683    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:23.521721    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:26.842727    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:26.842895    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:36:26.856913    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:36:26.857017    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:36:26.868014    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:36:26.868103    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:36:26.878404    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:36:26.878480    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:36:26.889064    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:36:26.889154    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:36:26.900740    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:36:26.900812    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:36:26.919836    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:36:26.919918    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:36:26.929748    9103 logs.go:276] 0 containers: []
	W0923 03:36:26.929758    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:36:26.929818    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:36:26.945075    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:36:26.945093    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:36:26.945099    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:36:26.958672    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:36:26.958685    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:36:26.975109    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:36:26.975122    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:36:26.987442    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:36:26.987452    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:36:26.999378    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:36:26.999389    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:36:27.010928    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:36:27.010941    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:36:27.015135    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:36:27.015144    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:36:27.029121    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:36:27.029130    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:36:27.049470    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:36:27.049483    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:36:27.062016    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:36:27.062027    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:36:27.096896    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:36:27.096911    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:36:27.113965    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:36:27.113978    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:36:27.130014    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:36:27.130027    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:36:27.141751    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:36:27.141762    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:36:27.164939    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:36:27.164948    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:36:27.204299    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:36:27.204311    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:36:28.521937    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:28.521969    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:29.729771    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:33.522361    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:33.522467    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:34.731891    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:34.732009    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:36:34.742910    9103 logs.go:276] 2 containers: [ec4832fd0615 f734c17924bf]
	I0923 03:36:34.742992    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:36:34.754372    9103 logs.go:276] 2 containers: [4e982c61da4a 5d3d5fd4ca58]
	I0923 03:36:34.754451    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:36:34.765334    9103 logs.go:276] 1 containers: [069a5dfff1e2]
	I0923 03:36:34.765412    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:36:34.775847    9103 logs.go:276] 2 containers: [e4cf8e26a781 055f29a011ae]
	I0923 03:36:34.775918    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:36:34.789165    9103 logs.go:276] 1 containers: [b396c0cb3924]
	I0923 03:36:34.789245    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:36:34.799708    9103 logs.go:276] 2 containers: [5c6dc7823878 82480c643115]
	I0923 03:36:34.799788    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:36:34.809634    9103 logs.go:276] 0 containers: []
	W0923 03:36:34.809645    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:36:34.809711    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:36:34.824263    9103 logs.go:276] 1 containers: [a7ae46e29668]
	I0923 03:36:34.824280    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:36:34.824286    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:36:34.828727    9103 logs.go:123] Gathering logs for kube-apiserver [ec4832fd0615] ...
	I0923 03:36:34.828736    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec4832fd0615"
	I0923 03:36:34.842686    9103 logs.go:123] Gathering logs for coredns [069a5dfff1e2] ...
	I0923 03:36:34.842696    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069a5dfff1e2"
	I0923 03:36:34.855372    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:36:34.855383    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:36:34.910980    9103 logs.go:123] Gathering logs for kube-apiserver [f734c17924bf] ...
	I0923 03:36:34.910995    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f734c17924bf"
	I0923 03:36:34.931056    9103 logs.go:123] Gathering logs for etcd [5d3d5fd4ca58] ...
	I0923 03:36:34.931066    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3d5fd4ca58"
	I0923 03:36:34.949514    9103 logs.go:123] Gathering logs for kube-proxy [b396c0cb3924] ...
	I0923 03:36:34.949528    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b396c0cb3924"
	I0923 03:36:34.961082    9103 logs.go:123] Gathering logs for kube-controller-manager [82480c643115] ...
	I0923 03:36:34.961092    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82480c643115"
	I0923 03:36:34.972257    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:36:34.972267    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:36:35.011284    9103 logs.go:123] Gathering logs for etcd [4e982c61da4a] ...
	I0923 03:36:35.011300    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e982c61da4a"
	I0923 03:36:35.025988    9103 logs.go:123] Gathering logs for kube-scheduler [055f29a011ae] ...
	I0923 03:36:35.025999    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055f29a011ae"
	I0923 03:36:35.040980    9103 logs.go:123] Gathering logs for storage-provisioner [a7ae46e29668] ...
	I0923 03:36:35.040993    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ae46e29668"
	I0923 03:36:35.052223    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:36:35.052233    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:36:35.063940    9103 logs.go:123] Gathering logs for kube-scheduler [e4cf8e26a781] ...
	I0923 03:36:35.063950    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4cf8e26a781"
	I0923 03:36:35.082561    9103 logs.go:123] Gathering logs for kube-controller-manager [5c6dc7823878] ...
	I0923 03:36:35.082571    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6dc7823878"
	I0923 03:36:35.099889    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:36:35.099899    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:36:37.625687    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:38.523473    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:38.523535    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:42.627892    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:42.627971    9103 kubeadm.go:597] duration metric: took 4m4.027797917s to restartPrimaryControlPlane
	W0923 03:36:42.628026    9103 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0923 03:36:42.628047    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0923 03:36:43.604367    9103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 03:36:43.609294    9103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 03:36:43.612087    9103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 03:36:43.614835    9103 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 03:36:43.614841    9103 kubeadm.go:157] found existing configuration files:
	
	I0923 03:36:43.614870    9103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/admin.conf
	I0923 03:36:43.617964    9103 kubeadm.go:163] "https://control-plane.minikube.internal:51268" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 03:36:43.617990    9103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 03:36:43.621405    9103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/kubelet.conf
	I0923 03:36:43.624044    9103 kubeadm.go:163] "https://control-plane.minikube.internal:51268" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 03:36:43.624073    9103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 03:36:43.626578    9103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/controller-manager.conf
	I0923 03:36:43.629704    9103 kubeadm.go:163] "https://control-plane.minikube.internal:51268" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 03:36:43.629729    9103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 03:36:43.633000    9103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/scheduler.conf
	I0923 03:36:43.635513    9103 kubeadm.go:163] "https://control-plane.minikube.internal:51268" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51268 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 03:36:43.635538    9103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 03:36:43.638200    9103 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 03:36:43.657309    9103 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0923 03:36:43.657352    9103 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 03:36:43.706954    9103 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 03:36:43.707012    9103 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 03:36:43.707061    9103 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0923 03:36:43.761350    9103 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 03:36:43.765558    9103 out.go:235]   - Generating certificates and keys ...
	I0923 03:36:43.765594    9103 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 03:36:43.765627    9103 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 03:36:43.765671    9103 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 03:36:43.765705    9103 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0923 03:36:43.765742    9103 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0923 03:36:43.765771    9103 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0923 03:36:43.765832    9103 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0923 03:36:43.765867    9103 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0923 03:36:43.765931    9103 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 03:36:43.765971    9103 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 03:36:43.765987    9103 kubeadm.go:310] [certs] Using the existing "sa" key
	I0923 03:36:43.766018    9103 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 03:36:43.849251    9103 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 03:36:44.033484    9103 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 03:36:44.075720    9103 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 03:36:44.146643    9103 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 03:36:44.174318    9103 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 03:36:44.174739    9103 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 03:36:44.174854    9103 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 03:36:44.259521    9103 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 03:36:43.524419    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:43.524440    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:44.267613    9103 out.go:235]   - Booting up control plane ...
	I0923 03:36:44.267725    9103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 03:36:44.267777    9103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 03:36:44.267887    9103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 03:36:44.267932    9103 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 03:36:44.268022    9103 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0923 03:36:48.269635    9103 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.003058 seconds
	I0923 03:36:48.269693    9103 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 03:36:48.273064    9103 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 03:36:48.784470    9103 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 03:36:48.784666    9103 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-515000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 03:36:49.288053    9103 kubeadm.go:310] [bootstrap-token] Using token: l9acn1.amb07pew0jrfe2vi
	I0923 03:36:49.294427    9103 out.go:235]   - Configuring RBAC rules ...
	I0923 03:36:49.294492    9103 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 03:36:49.294545    9103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 03:36:49.296579    9103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 03:36:49.297928    9103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 03:36:49.298793    9103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 03:36:49.299665    9103 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 03:36:49.303103    9103 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 03:36:49.477770    9103 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 03:36:49.691979    9103 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 03:36:49.692474    9103 kubeadm.go:310] 
	I0923 03:36:49.692508    9103 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 03:36:49.692511    9103 kubeadm.go:310] 
	I0923 03:36:49.692564    9103 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 03:36:49.692573    9103 kubeadm.go:310] 
	I0923 03:36:49.692590    9103 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 03:36:49.692617    9103 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 03:36:49.692642    9103 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 03:36:49.692644    9103 kubeadm.go:310] 
	I0923 03:36:49.692672    9103 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 03:36:49.692676    9103 kubeadm.go:310] 
	I0923 03:36:49.692701    9103 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 03:36:49.692703    9103 kubeadm.go:310] 
	I0923 03:36:49.692729    9103 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 03:36:49.692773    9103 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 03:36:49.692816    9103 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 03:36:49.692820    9103 kubeadm.go:310] 
	I0923 03:36:49.692865    9103 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 03:36:49.692914    9103 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 03:36:49.692916    9103 kubeadm.go:310] 
	I0923 03:36:49.692958    9103 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l9acn1.amb07pew0jrfe2vi \
	I0923 03:36:49.693012    9103 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:81f714c3d291eb98c715930c3a37747b44257f11dd80fa89b92bbab22cea301f \
	I0923 03:36:49.693028    9103 kubeadm.go:310] 	--control-plane 
	I0923 03:36:49.693031    9103 kubeadm.go:310] 
	I0923 03:36:49.693090    9103 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 03:36:49.693096    9103 kubeadm.go:310] 
	I0923 03:36:49.693155    9103 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l9acn1.amb07pew0jrfe2vi \
	I0923 03:36:49.693215    9103 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:81f714c3d291eb98c715930c3a37747b44257f11dd80fa89b92bbab22cea301f 
	I0923 03:36:49.693272    9103 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 03:36:49.693279    9103 cni.go:84] Creating CNI manager for ""
	I0923 03:36:49.693288    9103 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:36:49.696740    9103 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 03:36:49.703849    9103 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 03:36:49.706762    9103 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 03:36:49.711498    9103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 03:36:49.711545    9103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 03:36:49.711576    9103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-515000 minikube.k8s.io/updated_at=2024_09_23T03_36_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=running-upgrade-515000 minikube.k8s.io/primary=true
	I0923 03:36:49.756157    9103 kubeadm.go:1113] duration metric: took 44.651083ms to wait for elevateKubeSystemPrivileges
	I0923 03:36:49.756182    9103 ops.go:34] apiserver oom_adj: -16
	I0923 03:36:49.756228    9103 kubeadm.go:394] duration metric: took 4m11.170342708s to StartCluster
	I0923 03:36:49.756240    9103 settings.go:142] acquiring lock: {Name:mk179b7e7e669ed9fc071f7eb5301e91538a634e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:36:49.756401    9103 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:36:49.756810    9103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/kubeconfig: {Name:mkaea904cf5cdb46cc70169c92ea1151561be4fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:36:49.757008    9103 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:36:49.757016    9103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 03:36:49.757085    9103 config.go:182] Loaded profile config "running-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:36:49.757114    9103 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-515000"
	I0923 03:36:49.757090    9103 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-515000"
	I0923 03:36:49.757130    9103 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-515000"
	W0923 03:36:49.757135    9103 addons.go:243] addon storage-provisioner should already be in state true
	I0923 03:36:49.757149    9103 host.go:66] Checking if "running-upgrade-515000" exists ...
	I0923 03:36:49.757121    9103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-515000"
	I0923 03:36:49.757997    9103 kapi.go:59] client config for running-upgrade-515000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/running-upgrade-515000/client.key", CAFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105c06030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 03:36:49.758126    9103 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-515000"
	W0923 03:36:49.758130    9103 addons.go:243] addon default-storageclass should already be in state true
	I0923 03:36:49.758137    9103 host.go:66] Checking if "running-upgrade-515000" exists ...
	I0923 03:36:49.761781    9103 out.go:177] * Verifying Kubernetes components...
	I0923 03:36:49.762141    9103 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 03:36:49.765884    9103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 03:36:49.765891    9103 sshutil.go:53] new ssh client: &{IP:localhost Port:51236 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/running-upgrade-515000/id_rsa Username:docker}
	I0923 03:36:49.769743    9103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:36:48.525454    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:48.525496    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:49.773674    9103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:49.776705    9103 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 03:36:49.776711    9103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 03:36:49.776716    9103 sshutil.go:53] new ssh client: &{IP:localhost Port:51236 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/running-upgrade-515000/id_rsa Username:docker}
	I0923 03:36:49.872302    9103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 03:36:49.877022    9103 api_server.go:52] waiting for apiserver process to appear ...
	I0923 03:36:49.877080    9103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 03:36:49.881221    9103 api_server.go:72] duration metric: took 124.205083ms to wait for apiserver process to appear ...
	I0923 03:36:49.881228    9103 api_server.go:88] waiting for apiserver healthz status ...
	I0923 03:36:49.881235    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:49.914238    9103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 03:36:49.940286    9103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 03:36:50.224439    9103 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 03:36:50.224451    9103 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 03:36:53.526988    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:53.527057    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:54.883190    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:54.883220    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:58.529208    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:58.529256    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:59.883411    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:59.883455    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:03.531386    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:03.531408    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:04.883693    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:04.883730    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:08.533477    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:08.533510    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:09.884059    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:09.884084    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:13.535673    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:13.535956    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:13.556912    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:37:13.557029    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:13.571808    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:37:13.571906    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:13.584051    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:37:13.584137    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:13.595105    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:37:13.595190    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:13.605577    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:37:13.605660    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:13.616444    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:37:13.616527    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:13.626265    9267 logs.go:276] 0 containers: []
	W0923 03:37:13.626278    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:13.626358    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:13.639805    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:37:13.639827    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:37:13.639833    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:37:13.653277    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:37:13.653289    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:37:13.671104    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:13.671118    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:13.696644    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:13.696652    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:13.736515    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:37:13.736525    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:37:13.777710    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:37:13.777724    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:37:13.789130    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:13.789142    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:13.794010    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:37:13.794016    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:37:13.809640    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:37:13.809655    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:37:13.824493    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:37:13.824509    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:13.836320    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:13.836333    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:13.939188    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:37:13.939202    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:37:13.950276    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:37:13.950288    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:37:13.964788    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:37:13.964799    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:37:13.975883    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:37:13.975896    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:37:13.989802    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:37:13.989814    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:37:14.004386    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:37:14.004397    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:37:16.520184    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:14.884585    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:14.884644    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:19.884793    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:19.884832    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0923 03:37:20.226232    9103 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0923 03:37:20.229636    9103 out.go:177] * Enabled addons: storage-provisioner
	I0923 03:37:21.520500    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:21.520930    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:21.553423    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:37:21.553574    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:21.573380    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:37:21.573509    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:21.592117    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:37:21.592204    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:21.603774    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:37:21.603848    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:21.614265    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:37:21.614364    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:21.624940    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:37:21.625019    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:21.636014    9267 logs.go:276] 0 containers: []
	W0923 03:37:21.636025    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:21.636095    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:21.646909    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:37:21.646928    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:37:21.646935    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:21.659547    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:37:21.659558    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:37:21.674773    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:37:21.674784    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:37:21.686292    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:37:21.686303    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:37:21.728610    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:37:21.728620    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:37:21.744465    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:37:21.744475    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:37:21.757372    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:37:21.757383    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:37:21.777058    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:37:21.777069    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:37:21.792278    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:21.792286    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:21.831892    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:21.831904    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:21.836001    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:37:21.836008    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:37:21.849707    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:21.849719    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:21.875097    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:37:21.875105    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:37:21.887539    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:37:21.887548    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:37:21.899707    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:21.899723    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:21.940893    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:37:21.940908    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:37:21.955511    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:37:21.955524    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:37:20.242574    9103 addons.go:510] duration metric: took 30.486225625s for enable addons: enabled=[storage-provisioner]
	I0923 03:37:24.469432    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:24.885608    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:24.885655    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:29.471407    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:29.471983    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:29.506246    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:37:29.506403    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:29.526226    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:37:29.526339    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:29.541649    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:37:29.541741    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:29.554191    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:37:29.554282    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:29.566999    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:37:29.567081    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:29.578164    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:37:29.578253    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:29.589042    9267 logs.go:276] 0 containers: []
	W0923 03:37:29.589053    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:29.589116    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:29.600835    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:37:29.600862    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:37:29.600868    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:37:29.619419    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:37:29.619432    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:37:29.631620    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:37:29.631629    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:37:29.649271    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:37:29.649284    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:37:29.660848    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:37:29.660863    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:29.672848    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:29.672860    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:29.712972    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:29.712980    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:29.749179    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:37:29.749193    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:37:29.764321    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:37:29.764331    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:37:29.775472    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:29.775484    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:29.799023    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:37:29.799034    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:37:29.813311    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:37:29.813320    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:37:29.852033    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:37:29.852044    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:37:29.863627    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:37:29.863640    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:37:29.877963    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:37:29.877973    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:37:29.895185    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:29.895199    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:29.899325    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:37:29.899331    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:37:32.414449    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:29.886590    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:29.886613    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:37.416653    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:37.416816    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:37.431547    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:37:37.431627    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:37.442286    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:37:37.442361    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:37.452735    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:37:37.452820    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:37.463451    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:37:37.463530    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:37.473871    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:37:37.473950    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:37.484852    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:37:37.484934    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:37.494594    9267 logs.go:276] 0 containers: []
	W0923 03:37:37.494611    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:37.494681    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:37.505339    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:37:37.505357    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:37.505362    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:37.543115    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:37.543131    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:37.579453    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:37:37.579464    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:37:37.594686    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:37:37.594697    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:37:37.611769    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:37:37.611780    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:37:37.625953    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:37:37.625963    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:37:37.640535    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:37:37.640546    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:37:37.652209    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:37:37.652225    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:37:37.663772    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:37:37.663782    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:37.675705    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:37:37.675714    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:37:37.719109    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:37:37.719122    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:37:37.732000    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:37:37.732011    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:37:37.743575    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:37:37.743586    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:37:37.755275    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:37.755286    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:37.778901    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:37.778908    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:37.782600    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:37:37.782606    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:37:37.802654    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:37:37.802670    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:37:34.887751    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:34.887778    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:40.321044    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:39.889264    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:39.889308    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:45.323133    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:45.323288    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:45.340321    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:37:45.340427    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:45.354007    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:37:45.354096    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:45.365293    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:37:45.365367    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:45.379160    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:37:45.379244    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:45.389822    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:37:45.389901    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:45.400796    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:37:45.400869    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:45.411197    9267 logs.go:276] 0 containers: []
	W0923 03:37:45.411210    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:45.411282    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:45.421728    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:37:45.421746    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:37:45.421753    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:45.433306    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:45.433322    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:45.471669    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:45.471677    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:45.476181    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:45.476188    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:45.512883    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:37:45.512898    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:37:45.535075    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:37:45.535085    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:37:45.547300    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:37:45.547310    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:37:45.564364    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:37:45.564374    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:37:45.575315    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:37:45.575330    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:37:45.586162    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:37:45.586172    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:37:45.597399    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:37:45.597414    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:37:45.611628    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:37:45.611637    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:37:45.650022    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:37:45.650037    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:37:45.664959    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:37:45.664970    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:37:45.676802    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:37:45.676812    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:37:45.692506    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:45.692518    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:45.716868    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:37:45.716878    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:37:44.891231    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:44.891288    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:48.232765    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:49.893564    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:49.893739    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:49.919876    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:37:49.920077    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:49.934400    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:37:49.934468    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:49.944794    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:37:49.944860    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:49.955193    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:37:49.955263    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:49.965790    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:37:49.965862    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:49.980902    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:37:49.980966    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:49.990910    9103 logs.go:276] 0 containers: []
	W0923 03:37:49.990920    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:49.990980    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:50.001374    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:37:50.001387    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:37:50.001392    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:37:50.018429    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:37:50.018438    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:50.029841    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:37:50.029851    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:37:50.044599    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:37:50.044610    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:37:50.056256    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:37:50.056270    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:37:50.075953    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:37:50.075963    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:37:50.090686    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:37:50.090699    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:37:50.102262    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:37:50.102281    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:37:50.114538    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:50.114548    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:50.138536    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:50.138545    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:50.175109    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:50.175118    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:50.179394    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:50.179400    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:50.216822    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:37:50.216834    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:37:52.733596    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:53.233060    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:53.233309    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:53.253630    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:37:53.253744    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:53.267939    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:37:53.268028    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:53.279758    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:37:53.279844    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:53.294673    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:37:53.294754    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:53.305501    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:37:53.305577    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:53.315644    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:37:53.315725    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:53.325972    9267 logs.go:276] 0 containers: []
	W0923 03:37:53.325983    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:53.326049    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:53.336430    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:37:53.336446    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:37:53.336452    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:37:53.354049    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:37:53.354059    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:37:53.366188    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:37:53.366198    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:37:53.379753    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:37:53.379763    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:37:53.391365    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:53.391378    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:53.417184    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:37:53.417191    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:37:53.434258    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:37:53.434268    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:37:53.449337    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:37:53.449353    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:37:53.461297    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:37:53.461314    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:53.472742    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:53.472754    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:53.476882    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:37:53.476890    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:37:53.492825    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:37:53.492838    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:37:53.534388    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:37:53.534399    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:37:53.545871    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:53.545884    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:53.583116    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:53.583127    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:53.619649    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:37:53.619664    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:37:53.637838    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:37:53.637853    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:37:56.157993    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:57.733960    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:57.734172    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:57.749932    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:37:57.750042    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:57.761538    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:37:57.761623    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:57.772946    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:37:57.773030    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:57.786250    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:37:57.786336    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:57.799460    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:37:57.799540    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:57.809919    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:37:57.809998    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:57.820042    9103 logs.go:276] 0 containers: []
	W0923 03:37:57.820053    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:57.820127    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:57.832215    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:37:57.832229    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:57.832234    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:57.836654    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:37:57.836663    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:37:57.849257    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:37:57.849274    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:37:57.864126    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:37:57.864141    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:37:57.881124    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:37:57.881138    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:37:57.892420    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:57.892434    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:57.917789    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:57.917796    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:57.955244    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:57.955251    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:57.991693    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:37:57.991705    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:37:58.006201    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:37:58.006212    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:37:58.020600    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:37:58.020613    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:37:58.031887    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:37:58.031902    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:37:58.043312    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:37:58.043321    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:01.160299    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:01.160408    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:01.171566    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:01.171646    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:01.182363    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:01.182446    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:01.193304    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:01.193388    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:01.205640    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:01.205721    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:01.216040    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:01.216117    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:01.227341    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:01.227445    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:01.237484    9267 logs.go:276] 0 containers: []
	W0923 03:38:01.237495    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:01.237563    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:01.248010    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:01.248026    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:01.248031    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:01.261974    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:01.261984    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:01.276295    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:01.276306    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:01.293609    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:01.293620    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:01.310097    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:01.310106    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:01.325189    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:01.325199    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:01.339477    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:01.339487    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:01.351447    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:01.351458    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:01.363646    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:01.363656    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:01.375283    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:01.375293    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:01.387202    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:01.387214    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:01.425535    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:01.425547    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:01.429990    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:01.429998    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:01.444406    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:01.444421    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:01.456321    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:01.456330    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:01.480435    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:01.480442    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:01.521807    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:01.521825    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:00.558698    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:04.061314    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:05.559504    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:05.559834    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:05.585630    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:05.585773    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:05.600780    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:05.600877    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:05.613783    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:38:05.613861    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:05.624561    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:05.624632    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:05.634851    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:05.634919    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:05.651324    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:05.651409    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:05.662135    9103 logs.go:276] 0 containers: []
	W0923 03:38:05.662149    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:05.662219    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:05.672486    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:05.672500    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:05.672506    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:05.696665    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:05.696676    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:05.709675    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:05.709686    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:05.721754    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:05.721766    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:05.733501    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:05.733517    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:05.768230    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:05.768245    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:05.782481    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:05.782491    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:05.796553    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:05.796564    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:05.808795    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:05.808807    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:05.820668    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:05.820678    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:38:05.835497    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:05.835507    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:05.873344    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:05.873352    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:05.878196    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:05.878202    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:09.063777    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:09.064089    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:09.094126    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:09.094278    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:09.111512    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:09.111616    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:09.125040    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:09.125137    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:09.136136    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:09.136221    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:09.146053    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:09.146135    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:09.161463    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:09.161538    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:09.173023    9267 logs.go:276] 0 containers: []
	W0923 03:38:09.173036    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:09.173103    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:09.183516    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:09.183534    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:09.183539    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:09.198768    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:09.198779    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:09.212004    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:09.212016    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:09.229709    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:09.229722    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:09.248298    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:09.248309    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:09.259984    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:09.259994    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:09.298786    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:09.298795    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:09.313289    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:09.313302    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:09.324805    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:09.324818    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:09.336883    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:09.336893    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:09.374373    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:09.374389    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:09.386124    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:09.386136    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:09.397871    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:09.397882    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:09.402704    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:09.402711    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:09.437859    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:09.437872    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:09.451550    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:09.451560    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:09.469245    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:09.469258    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:11.995445    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:08.397587    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:16.997780    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
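Each "Checking apiserver healthz ... stopped" pair above is a probe of the apiserver's /healthz endpoint that gives up after about five seconds with "Client.Timeout exceeded". A rough manual equivalent (hypothetical, not a command taken from this log), assuming the same guest address and an untrusted serving certificate:

    # Probe the same endpoint by hand; -k skips certificate verification,
    # --max-time approximates the Go client's timeout seen in the log.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz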
	I0923 03:38:16.998008    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:17.019986    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:17.020111    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:17.036327    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:17.036420    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:17.048495    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:17.048579    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:17.059614    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:17.059694    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:17.073964    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:17.074042    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:17.084770    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:17.084858    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:17.095300    9267 logs.go:276] 0 containers: []
	W0923 03:38:17.095317    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:17.095388    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:17.106118    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:17.106136    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:17.106142    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:17.118005    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:17.118019    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:17.132180    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:17.132190    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:17.136344    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:17.136351    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:17.150079    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:17.150091    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:17.164373    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:17.164384    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:17.202320    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:17.202335    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:17.228143    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:17.228155    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:17.242537    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:17.242549    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:17.254620    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:17.254637    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:17.269777    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:17.269787    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:17.287749    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:17.287759    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:17.299123    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:17.299135    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:17.310909    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:17.310920    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:17.322593    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:17.322601    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:17.359609    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:17.359618    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:17.394753    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:17.394763    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:13.399698    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:13.399863    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:13.419338    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:13.419441    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:13.433697    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:13.433791    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:13.445522    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:38:13.445607    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:13.456346    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:13.456433    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:13.466314    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:13.466390    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:13.476451    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:13.476534    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:13.486818    9103 logs.go:276] 0 containers: []
	W0923 03:38:13.486831    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:13.486904    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:13.497152    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:13.497167    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:13.497173    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:13.509886    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:13.509896    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:13.523171    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:13.523185    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:13.534612    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:13.534624    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:13.552054    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:13.552067    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:13.568989    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:13.569002    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:13.606987    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:13.606994    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:13.611257    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:13.611265    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:13.646176    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:13.646188    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:13.671020    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:13.671029    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:13.681891    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:13.681903    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:13.695940    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:13.695954    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:13.709624    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:13.709635    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
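Two minikube processes are probing in parallel here (PIDs 9103 and 9267, the fourth column of each line), which is why wall-clock timestamps occasionally step backwards where their output interleaves. To follow a single process, filter on its PID column; the filename below is hypothetical:

    # Isolate one process's stream from a saved copy of this log:
    grep ' 9103 ' minikube-test.log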
	I0923 03:38:16.226442    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:19.915851    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:21.228604    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:21.228854    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:21.249612    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:21.249725    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:21.264312    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:21.264404    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:21.277102    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:38:21.277189    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:21.287661    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:21.287740    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:21.298461    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:21.298545    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:21.309432    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:21.309511    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:21.319308    9103 logs.go:276] 0 containers: []
	W0923 03:38:21.319320    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:21.319385    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:21.329793    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:21.329806    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:21.329811    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:21.341523    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:21.341533    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:21.352914    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:21.352926    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:21.371140    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:21.371151    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:21.382512    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:21.382526    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:21.405775    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:21.405784    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:21.417939    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:21.417949    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:21.455567    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:21.455578    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:21.492912    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:21.492927    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:21.507101    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:21.507110    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:21.523798    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:21.523809    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:21.535130    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:21.535140    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:38:21.549634    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:21.549648    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:24.918156    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:24.918392    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:24.941870    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:24.942060    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:24.959602    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:24.959683    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:24.972284    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:24.972373    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:24.983374    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:24.983455    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:24.993867    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:24.993947    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:25.004478    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:25.004563    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:25.014871    9267 logs.go:276] 0 containers: []
	W0923 03:38:25.014882    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:25.014949    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:25.025546    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:25.025566    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:25.025571    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:25.037531    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:25.037545    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:25.062379    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:25.062388    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:25.087650    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:25.087659    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:25.099937    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:25.099949    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:25.135381    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:25.135398    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:25.175342    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:25.175357    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:25.189872    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:25.189891    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:25.205514    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:25.205528    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:25.219988    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:25.220003    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:25.231735    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:25.231748    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:25.271616    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:25.271630    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:25.275771    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:25.275777    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:25.293021    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:25.293035    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:25.312759    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:25.312770    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:25.328068    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:25.328081    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:25.339169    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:25.339180    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:27.861491    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:24.056383    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:32.862938    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:32.863185    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:32.882375    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:32.882481    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:32.896496    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:32.896591    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:32.908673    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:32.908756    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:32.919404    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:32.919484    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:32.929972    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:32.930056    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:32.940437    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:32.940514    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:32.957302    9267 logs.go:276] 0 containers: []
	W0923 03:38:32.957313    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:32.957383    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:32.967754    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:32.967774    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:32.967779    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:32.971976    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:32.971982    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:32.986351    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:32.986362    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:33.000750    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:33.000760    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:29.058757    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:29.059188    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:29.089406    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:29.089569    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:29.109316    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:29.109430    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:29.124103    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:38:29.124193    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:29.135898    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:29.135985    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:29.146738    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:29.146814    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:29.158076    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:29.158148    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:29.167761    9103 logs.go:276] 0 containers: []
	W0923 03:38:29.167774    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:29.167843    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:29.178142    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:29.178153    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:29.178158    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:29.215020    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:29.215032    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:29.229664    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:29.229673    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:29.244827    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:29.244838    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:29.259574    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:29.259584    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:29.277166    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:29.277176    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:29.288825    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:29.288838    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:29.314057    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:29.314074    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:29.318383    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:29.318392    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:29.353511    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:29.353522    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:29.367706    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:29.367715    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:29.379280    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:29.379293    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:38:29.393966    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:29.393977    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:31.907585    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:33.012023    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:33.012034    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:33.027133    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:33.027143    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:33.051033    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:33.051043    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:33.062660    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:33.062670    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:33.099753    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:33.099767    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:33.134013    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:33.134024    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:33.148566    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:33.148581    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:33.161629    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:33.161641    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:33.176926    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:33.176937    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:33.194471    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:33.194484    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:33.210358    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:33.210374    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:33.248625    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:33.248637    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:33.265954    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:33.265970    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:35.779665    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:36.909799    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:36.909984    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:36.924576    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:36.924675    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:36.935873    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:36.935953    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:36.946112    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:38:36.946198    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:36.956451    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:36.956535    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:36.966909    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:36.966981    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:36.977238    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:36.977311    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:36.987588    9103 logs.go:276] 0 containers: []
	W0923 03:38:36.987599    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:36.987662    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:36.997741    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:36.997755    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:36.997760    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:37.008783    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:37.008795    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:37.033668    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:37.033675    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:37.071781    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:37.071791    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:37.085690    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:37.085703    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:37.097898    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:37.097910    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:38:37.112518    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:37.112528    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:37.130687    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:37.130702    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:37.142101    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:37.142119    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:37.146529    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:37.146537    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:37.181633    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:37.181648    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:37.198373    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:37.198384    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:37.215317    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:37.215327    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:40.782095    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:40.782324    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:40.800763    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:40.800919    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:40.814351    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:40.814445    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:40.826253    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:40.826338    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:40.837025    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:40.837113    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:40.848041    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:40.848121    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:40.858543    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:40.858631    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:40.868493    9267 logs.go:276] 0 containers: []
	W0923 03:38:40.868502    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:40.868567    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:40.878653    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:40.878671    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:40.878677    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:40.915100    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:40.915113    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:40.952741    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:40.952753    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:40.964995    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:40.965005    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:40.984746    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:40.984755    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:41.007584    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:41.007591    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:41.011519    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:41.011527    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:41.023080    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:41.023092    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:41.034924    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:41.034936    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:41.046228    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:41.046239    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:41.085722    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:41.085731    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:41.100288    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:41.100297    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:41.117546    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:41.117561    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:41.129037    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:41.129046    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:41.143332    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:41.143342    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:41.157760    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:41.157772    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:41.169409    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:41.169418    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:39.734772    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:43.686278    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:44.737026    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:44.737502    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:44.767256    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:44.767377    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:44.783522    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:44.783615    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:44.797129    9103 logs.go:276] 2 containers: [5173168dcb78 a9c407fbfbed]
	I0923 03:38:44.797207    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:44.808458    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:44.808539    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:44.818792    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:44.818866    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:44.829084    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:44.829156    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:44.839633    9103 logs.go:276] 0 containers: []
	W0923 03:38:44.839644    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:44.839707    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:44.850113    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:44.850128    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:44.850134    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:44.891286    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:44.891300    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:44.905713    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:44.905725    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:44.922272    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:44.922287    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:44.933955    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:44.933965    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:44.945531    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:44.945544    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:44.969006    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:44.969013    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:44.980205    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:44.980219    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:45.016918    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:45.016928    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:45.021574    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:45.021579    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:45.035760    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:45.035771    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:45.047402    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:45.047417    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:38:45.067621    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:45.067632    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:47.586674    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:48.688714    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:48.688935    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:48.711962    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:48.712081    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:48.727519    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:48.727623    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:48.741727    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:48.741810    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:48.753256    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:48.753335    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:48.763607    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:48.763686    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:48.774089    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:48.774168    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:48.783956    9267 logs.go:276] 0 containers: []
	W0923 03:38:48.783969    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:48.784036    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:48.794563    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:48.794581    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:48.794587    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:48.806430    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:48.806440    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:48.810986    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:48.810995    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:48.846005    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:48.846017    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:48.858413    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:48.858428    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:48.876573    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:48.876584    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:48.914007    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:48.914015    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:48.950876    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:48.950887    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:48.969370    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:48.969386    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:48.983304    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:48.983313    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:48.994360    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:48.994372    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:49.016995    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:49.017001    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:49.029114    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:49.029125    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:49.043734    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:49.043745    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:49.055383    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:49.055395    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:49.075924    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:49.075934    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:49.090031    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:49.090042    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:51.606336    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:52.588773    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:52.588941    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:52.602479    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:38:52.602556    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:52.612600    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:38:52.612686    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:52.623208    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:38:52.623281    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:52.633561    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:38:52.633647    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:52.644503    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:38:52.644595    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:52.655872    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:38:52.655956    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:52.665703    9103 logs.go:276] 0 containers: []
	W0923 03:38:52.665719    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:52.665798    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:52.676277    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:38:52.676294    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:38:52.676300    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:38:52.687793    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:38:52.687803    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:38:52.699073    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:52.699083    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:52.736355    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:38:52.736363    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:38:52.750792    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:38:52.750802    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:38:52.765060    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:52.765072    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:52.769659    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:52.769668    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:52.803274    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:38:52.803288    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:38:52.817757    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:38:52.817773    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:38:52.829474    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:38:52.829484    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:38:52.844384    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:38:52.844396    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:38:52.861876    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:38:52.861886    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:52.873552    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:38:52.873564    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:38:52.884734    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:38:52.884749    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:38:52.896029    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:52.896042    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:56.607724    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:56.607887    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:56.623597    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:56.623695    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:56.636364    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:56.636468    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:56.647525    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:56.647613    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:56.658166    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:56.658257    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:56.668296    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:56.668371    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:56.678841    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:56.678923    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:56.689923    9267 logs.go:276] 0 containers: []
	W0923 03:38:56.689937    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:56.690009    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:56.700744    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:56.700763    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:56.700769    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:56.714647    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:56.714657    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:56.719116    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:56.719125    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:56.732920    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:56.732930    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:56.756076    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:56.756084    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:56.768215    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:56.768225    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:56.782242    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:56.782251    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:56.799728    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:56.799741    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:56.811247    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:56.811257    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:56.822679    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:56.822689    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:56.857689    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:56.857703    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:56.894296    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:56.894306    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:56.908775    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:56.908788    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:56.923504    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:56.923515    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:56.934660    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:56.934672    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:56.946911    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:56.946924    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:56.958915    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:56.958927    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:55.421973    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:59.500039    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:00.424209    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
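Each healthz probe in this log follows the same timing: process 9103 starts checking https://10.0.2.15:8443/healthz at 03:38:55 and declares it stopped at 03:39:00 with a client timeout, so each attempt waits roughly five seconds before falling back to log gathering. A hedged way to reproduce the probe by hand (curl is an assumption here; the log itself only shows the Go HTTP client's GET):

	# assumes curl is available in the guest; -k skips TLS verification,
	# --max-time mirrors the ~5 s client timeout seen in the log
	curl -k --max-time 5 https://10.0.2.15:8443/healthz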
	I0923 03:39:00.424438    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:00.445697    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:00.445822    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:00.460899    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:00.460990    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:00.473585    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:00.473674    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:00.485149    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:00.485237    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:00.496539    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:00.496624    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:00.506875    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:00.506956    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:00.516813    9103 logs.go:276] 0 containers: []
	W0923 03:39:00.516830    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:00.516898    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:00.527008    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:00.527027    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:00.527033    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:00.562647    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:00.562658    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:00.574284    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:00.574295    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:00.586127    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:00.586142    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:00.600228    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:00.600239    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:00.612609    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:00.612623    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:00.624071    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:00.624083    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:00.636068    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:00.636080    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:00.661721    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:00.661728    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:00.700207    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:00.700216    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:00.704943    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:00.704954    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:00.718980    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:00.718990    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:00.730688    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:00.730700    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:00.745919    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:00.745931    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:00.765872    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:00.765882    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:03.281247    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:04.502215    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:04.502440    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:04.523431    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:04.523532    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:04.536950    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:04.537043    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:04.548587    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:04.548673    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:04.559294    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:04.559375    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:04.570623    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:04.570697    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:04.581780    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:04.581869    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:04.592044    9267 logs.go:276] 0 containers: []
	W0923 03:39:04.592055    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:04.592125    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:04.602380    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:39:04.602399    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:04.602405    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:04.637200    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:04.637211    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:04.651673    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:04.651685    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:04.689103    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:04.689115    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:04.701118    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:04.701128    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:04.741389    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:04.741397    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:04.756154    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:04.756167    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:04.771229    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:04.771242    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:04.785552    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:04.785562    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:04.802758    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:04.802767    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:04.817348    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:04.817360    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:04.841697    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:04.841707    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:04.846141    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:04.846148    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:04.860175    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:04.860184    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:04.871432    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:04.871445    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:04.882718    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:04.882728    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:04.897675    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:04.897686    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:07.411242    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:08.281815    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:08.282063    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:12.413436    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:12.413636    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:12.428998    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:12.429093    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:12.440153    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:12.440230    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:12.450959    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:12.451047    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:12.461646    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:12.461731    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:12.471961    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:12.472046    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:12.485758    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:12.485838    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:12.495690    9267 logs.go:276] 0 containers: []
	W0923 03:39:12.495701    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:12.495771    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:12.506362    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
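Note the difference between the two processes' container inventories: 9103 consistently finds one container per control-plane component (plus four coredns instances), while 9267 finds two IDs apiece for the apiserver, etcd, scheduler, controller-manager, and storage-provisioner. Because docker ps -a also lists exited containers, a second ID per component is consistent with those components having been restarted on 9267's cluster. To see which of each pair is still running, one could add the status column (the extra format field is an assumption; this log only formats {{.ID}}):

	# {{.Status}} is a standard docker ps template field, not used in this log
	docker ps -a --filter=name=k8s_etcd --format "{{.ID}} {{.Status}}"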
	I0923 03:39:12.506380    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:12.506385    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:12.520501    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:12.520510    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:12.563821    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:12.563832    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:12.577829    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:12.577840    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:12.618786    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:12.618799    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:12.633656    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:12.633669    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:12.648548    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:12.648562    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:12.671505    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:12.671524    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:12.684117    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:12.684129    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:12.696126    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:12.696138    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:12.710384    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:12.710395    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:12.724622    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:12.724632    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:12.728802    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:12.728810    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:12.740666    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:12.740678    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:12.752250    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:12.752259    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:12.769216    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:12.769226    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:12.780696    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:12.780706    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:08.304314    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:08.304431    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:08.322241    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:08.322333    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:08.337411    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:08.337485    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:08.348810    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:08.348891    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:08.359075    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:08.359148    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:08.369031    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:08.369108    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:08.379014    9103 logs.go:276] 0 containers: []
	W0923 03:39:08.379026    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:08.379092    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:08.405794    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:08.405812    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:08.405820    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:08.430715    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:08.430730    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:08.455853    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:08.455862    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:08.470552    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:08.470562    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:08.482264    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:08.482274    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:08.493787    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:08.493798    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:08.506545    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:08.506555    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:08.546547    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:08.546561    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:08.560784    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:08.560794    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:08.565349    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:08.565357    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:08.577065    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:08.577077    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:08.588927    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:08.588939    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:08.600430    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:08.600441    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:08.615502    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:08.615511    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:08.651182    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:08.651191    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:11.167720    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:15.318246    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:16.170192    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:16.170468    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:16.197351    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:16.197479    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:16.212604    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:16.212702    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:16.224500    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:16.224590    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:16.235783    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:16.235860    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:16.246448    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:16.246544    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:16.258590    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:16.258679    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:16.272392    9103 logs.go:276] 0 containers: []
	W0923 03:39:16.272406    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:16.272476    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:16.282833    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:16.282851    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:16.282857    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:16.288036    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:16.288047    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:16.302332    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:16.302343    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:16.314295    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:16.314305    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:16.326348    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:16.326365    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:16.350359    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:16.350368    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:16.362467    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:16.362476    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:16.374087    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:16.374097    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:16.389494    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:16.389505    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:16.400825    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:16.400840    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:16.415675    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:16.415691    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:16.427293    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:16.427304    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:16.439652    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:16.439666    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:16.476190    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:16.476204    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:16.511488    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:16.511500    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:20.320584    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:20.320874    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:20.351077    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:20.351192    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:20.365646    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:20.365748    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:20.377636    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:20.377717    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:20.388319    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:20.388404    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:20.398477    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:20.398564    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:20.409419    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:20.409500    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:20.425849    9267 logs.go:276] 0 containers: []
	W0923 03:39:20.425862    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:20.425940    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:20.436868    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:39:20.436884    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:20.436890    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:20.476693    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:20.476702    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:20.488303    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:20.488312    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:20.503096    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:20.503106    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:20.517336    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:20.517348    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:20.531938    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:20.531951    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:20.543565    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:20.543577    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:20.554820    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:20.554829    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:20.566384    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:20.566397    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:20.570555    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:20.570562    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:20.587818    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:20.587832    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:20.607049    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:20.607062    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:20.617850    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:20.617862    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:20.641908    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:20.641915    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:20.676153    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:20.676166    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:20.713587    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:20.713604    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:20.729809    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:20.729822    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:19.031930    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:23.245418    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:24.034161    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:24.034364    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:24.055843    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:24.055964    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:24.078727    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:24.078820    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:24.089847    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:24.089930    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:24.100178    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:24.100263    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:24.113611    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:24.113694    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:24.124229    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:24.124300    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:24.138234    9103 logs.go:276] 0 containers: []
	W0923 03:39:24.138245    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:24.138319    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:24.148608    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:24.148623    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:24.148630    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:24.160398    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:24.160408    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:24.172241    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:24.172262    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:24.183830    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:24.183840    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:24.223457    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:24.223468    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:24.235832    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:24.235844    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:24.273662    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:24.273674    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:24.285673    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:24.285685    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:24.297565    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:24.297577    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:24.322775    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:24.322786    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:24.334591    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:24.334601    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:24.339452    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:24.339459    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:24.354146    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:24.354160    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:24.368488    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:24.368500    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:24.387407    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:24.387416    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:26.907268    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:28.247080    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:28.247313    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:28.263272    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:28.263374    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:28.275198    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:28.275276    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:28.286563    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:28.286646    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:28.297191    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:28.297272    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:28.307783    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:28.307863    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:28.317812    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:28.317894    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:28.328361    9267 logs.go:276] 0 containers: []
	W0923 03:39:28.328374    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:28.328448    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:28.339405    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:39:28.339426    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:28.339432    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:28.377017    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:28.377029    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:28.415560    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:28.415573    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:28.431103    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:28.431117    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:28.442688    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:28.442700    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:28.446798    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:28.446805    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:28.465055    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:28.465064    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:28.476678    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:28.476690    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:28.491497    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:28.491507    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:28.515872    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:28.515880    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:28.528167    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:28.528176    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:28.549690    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:28.549701    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:28.560652    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:28.560666    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:28.595900    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:28.595910    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:28.610165    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:28.610176    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:28.632436    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:28.632448    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:28.644348    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:28.644358    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:31.169690    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:31.909490    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:31.909673    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:31.927213    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:31.927317    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:31.940320    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:31.940408    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:31.957552    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:31.957640    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:31.967592    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:31.967678    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:31.978649    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:31.978729    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:31.989429    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:31.989509    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:31.999250    9103 logs.go:276] 0 containers: []
	W0923 03:39:31.999263    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:31.999329    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:32.009657    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:32.009677    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:32.009684    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:32.023455    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:32.023468    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:32.037516    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:32.037531    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:32.049534    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:32.049546    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:32.061558    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:32.061572    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:32.079972    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:32.079983    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:32.091638    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:32.091649    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:32.130349    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:32.130361    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:32.150850    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:32.150860    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:32.163257    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:32.163269    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:32.168197    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:32.168204    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:32.204717    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:32.204732    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:32.221894    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:32.221907    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:32.238174    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:32.238187    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:32.250215    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:32.250228    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:36.171978    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:36.172277    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:36.201863    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:36.201996    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:36.224849    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:36.224944    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:36.237780    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:36.237864    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:36.249766    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:36.249852    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:36.261976    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:36.262051    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:36.272754    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:36.272828    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:36.282688    9267 logs.go:276] 0 containers: []
	W0923 03:39:36.282698    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:36.282762    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:36.295114    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:39:36.295133    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:36.295139    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:36.309999    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:36.310010    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:36.321712    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:36.321724    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:36.333560    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:36.333571    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:36.368452    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:36.368463    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:36.407204    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:36.407215    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:36.423348    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:36.423360    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:36.435068    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:36.435079    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:36.458353    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:36.458365    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:36.470343    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:36.470354    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:36.481413    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:36.481425    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:36.500046    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:36.500059    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:36.514187    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:36.514201    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:36.553503    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:36.553511    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:36.557838    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:36.557845    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:36.572816    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:36.572829    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:36.584464    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:36.584476    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:34.777400    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:39.109943    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:39.779697    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:39.779869    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:39.797737    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:39.797848    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:39.814148    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:39.814223    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:39.831129    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:39.831199    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:39.842053    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:39.842134    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:39.856371    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:39.856448    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:39.866551    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:39.866624    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:39.876926    9103 logs.go:276] 0 containers: []
	W0923 03:39:39.876941    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:39.877012    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:39.887206    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:39.887224    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:39.887230    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:39.926038    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:39.926049    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:39.940154    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:39.940170    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:39.951763    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:39.951774    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:39.963540    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:39.963551    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:39.975261    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:39.975271    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:40.010914    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:40.010925    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:40.025816    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:40.025831    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:40.037710    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:40.037720    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:40.056955    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:40.056970    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:40.068924    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:40.068933    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:40.093468    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:40.093476    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:40.097632    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:40.097639    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:40.109295    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:40.109308    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:40.121166    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:40.121180    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:42.640769    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:44.112487    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:44.112709    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:44.133899    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:44.134070    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:44.155366    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:44.155439    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:44.166586    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:44.166662    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:44.178267    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:44.178367    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:44.190554    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:44.190641    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:44.206683    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:44.206770    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:44.217861    9267 logs.go:276] 0 containers: []
	W0923 03:39:44.217875    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:44.217944    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:44.229694    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:39:44.229712    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:44.229718    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:44.271413    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:44.271431    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:44.289486    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:44.289501    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:44.302783    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:44.302800    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:44.325003    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:44.325011    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:44.362244    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:44.362251    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:44.398541    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:44.398552    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:44.416809    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:44.416822    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:44.429055    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:44.429065    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:44.440498    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:44.440508    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:44.452193    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:44.452208    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:44.467013    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:44.467023    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:44.478459    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:44.478470    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:44.489421    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:44.489432    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:44.504184    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:44.504194    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:44.518717    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:44.518730    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:44.522633    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:44.522640    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:47.037001    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:47.643329    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:47.643552    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:47.658823    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:47.658915    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:47.670173    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:47.670260    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:47.680881    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:47.680962    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:47.693033    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:47.693116    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:47.703925    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:47.704005    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:47.714505    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:47.714596    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:47.735830    9103 logs.go:276] 0 containers: []
	W0923 03:39:47.735842    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:47.735916    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:47.746370    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:47.746388    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:47.746394    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:47.760876    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:47.760887    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:47.801443    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:47.801459    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:47.806458    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:47.806464    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:47.828416    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:47.828427    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:47.850525    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:47.850538    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:47.862097    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:47.862108    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:47.880398    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:47.880409    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:47.892373    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:47.892383    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:47.904630    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:47.904643    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:47.916623    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:47.916635    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:47.928314    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:47.928327    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:47.948365    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:47.948374    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:47.971764    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:47.971771    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:48.031862    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:48.031898    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:52.039748    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:52.040042    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:52.070659    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:52.070809    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:52.091308    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:52.091409    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:52.106642    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:52.106727    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:52.118300    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:52.118387    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:52.129399    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:52.129480    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:52.140584    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:52.140664    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:52.151153    9267 logs.go:276] 0 containers: []
	W0923 03:39:52.151165    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:52.151230    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:52.162122    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:39:52.162139    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:52.162145    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:52.176572    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:52.176583    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:52.216741    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:52.216755    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:52.251832    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:52.251843    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:52.274256    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:52.274262    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:52.285621    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:52.285632    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:52.323686    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:52.323697    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:52.335155    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:52.335165    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:52.349654    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:52.349669    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:52.366750    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:52.366761    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:52.383288    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:52.383298    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:52.395290    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:52.395301    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:52.400075    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:52.400083    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:52.414426    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:52.414441    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:52.428964    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:52.428975    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:52.440216    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:52.440228    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:52.452177    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:52.452187    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:50.548531    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:54.968958    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:55.550861    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:55.551072    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:55.575859    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:39:55.576002    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:55.592670    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:39:55.592766    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:55.605750    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:39:55.605843    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:55.617104    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:39:55.617186    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:55.627405    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:39:55.627489    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:55.642092    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:39:55.642174    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:55.654045    9103 logs.go:276] 0 containers: []
	W0923 03:39:55.654056    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:55.654124    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:55.664834    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:39:55.664850    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:55.664855    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:55.701425    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:55.701434    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:55.705796    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:39:55.705806    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:39:55.719490    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:39:55.719503    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:39:55.734471    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:39:55.734484    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:39:55.751871    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:55.751883    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:55.775463    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:55.775476    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:55.810717    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:39:55.810733    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:39:55.825567    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:39:55.825582    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:39:55.837264    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:39:55.837279    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:39:55.848518    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:39:55.848533    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:39:55.859959    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:39:55.859974    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:39:55.874237    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:39:55.874247    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:55.886460    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:39:55.886475    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:39:55.898639    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:39:55.898654    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:39:59.971299    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:59.971472    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:59.984475    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:59.984590    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:59.998366    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:59.998557    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:00.009396    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:40:00.009479    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:00.020245    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:40:00.020327    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:00.030975    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:40:00.031047    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:00.041914    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:40:00.041998    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:00.052114    9267 logs.go:276] 0 containers: []
	W0923 03:40:00.052127    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:00.052203    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:00.062597    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:40:00.062615    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:40:00.062620    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:40:00.073846    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:40:00.073858    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:40:00.085610    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:40:00.085619    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:40:00.099890    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:40:00.099903    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:40:00.111186    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:40:00.111198    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:40:00.124546    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:40:00.124559    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:40:00.163411    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:40:00.163425    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:40:00.175480    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:40:00.175491    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:40:00.192448    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:40:00.192464    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:00.206243    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:00.206256    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:00.210532    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:40:00.210541    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:40:00.221950    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:40:00.221961    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:40:00.236479    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:00.236490    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:00.271541    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:40:00.271552    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:40:00.286668    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:40:00.286678    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:40:00.308891    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:00.308905    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:00.330614    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:00.330621    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:02.870019    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:58.415547    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:07.872152    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:07.872298    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:07.884678    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:40:07.884774    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:07.900235    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:40:07.900312    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:07.912115    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:40:07.912193    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:07.922843    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:40:07.922916    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:07.933642    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:40:07.933716    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:07.944280    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:40:07.944362    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:07.954128    9267 logs.go:276] 0 containers: []
	W0923 03:40:07.954140    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:07.954208    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:07.964012    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:40:07.964031    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:40:07.964036    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:40:07.977848    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:40:07.977857    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:40:07.989669    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:07.989680    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:08.010887    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:40:08.010896    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:03.417222    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:03.417391    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:03.432438    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:40:03.432538    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:03.444181    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:40:03.444267    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:03.454927    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:40:03.455006    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:03.465606    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:40:03.465680    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:03.476070    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:40:03.476138    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:03.486408    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:40:03.486476    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:03.500826    9103 logs.go:276] 0 containers: []
	W0923 03:40:03.500836    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:03.500899    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:03.511542    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:40:03.511560    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:40:03.511566    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:40:03.531312    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:40:03.531323    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:40:03.549717    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:40:03.549729    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:40:03.561871    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:40:03.561881    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:40:03.573050    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:40:03.573062    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:40:03.593277    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:40:03.593289    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:40:03.605194    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:40:03.605205    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:40:03.622455    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:40:03.622469    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:40:03.633796    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:40:03.633808    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:03.645798    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:03.645810    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:03.682283    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:03.682294    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:03.716990    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:40:03.717001    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:40:03.730664    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:03.730673    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:03.735165    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:40:03.735172    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:40:03.747264    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:03.747278    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:06.274123    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:08.022410    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:08.022421    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:08.026643    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:08.026649    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:08.062529    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:40:08.062545    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:40:08.101561    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:40:08.101572    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:40:08.115762    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:40:08.115772    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:40:08.127600    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:40:08.127611    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:40:08.139480    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:40:08.139490    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:40:08.153808    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:40:08.153819    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:40:08.169498    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:08.169509    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:08.209255    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:40:08.209273    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:40:08.221386    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:40:08.221399    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:40:08.243514    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:40:08.243528    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:40:08.260275    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:40:08.260288    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:40:10.773594    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:11.274608    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:11.275019    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:11.307924    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:40:11.308078    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:11.328365    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:40:11.328495    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:11.342714    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:40:11.342805    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:11.354723    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:40:11.354796    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:11.365895    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:40:11.365973    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:11.376046    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:40:11.376122    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:11.386296    9103 logs.go:276] 0 containers: []
	W0923 03:40:11.386312    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:11.386372    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:11.396681    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:40:11.396699    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:40:11.396704    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:40:11.410475    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:40:11.410486    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:40:11.422261    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:40:11.422277    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:40:11.433847    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:40:11.433856    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:40:11.445875    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:40:11.445886    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:11.458483    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:40:11.458500    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:40:11.472709    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:11.472717    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:11.497025    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:11.497036    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:11.536493    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:40:11.536502    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:40:11.548744    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:40:11.548758    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:40:11.560288    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:40:11.560304    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:40:11.577912    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:11.577925    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:11.616191    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:11.616199    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:11.620408    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:40:11.620413    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:40:11.635996    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:40:11.636007    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:40:15.776165    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:15.776283    9267 kubeadm.go:597] duration metric: took 4m3.831458375s to restartPrimaryControlPlane
	W0923 03:40:15.776370    9267 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0923 03:40:15.776411    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0923 03:40:16.830709    9267 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.054307709s)
	I0923 03:40:16.830800    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 03:40:16.835772    9267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 03:40:16.838857    9267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 03:40:16.841756    9267 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 03:40:16.841763    9267 kubeadm.go:157] found existing configuration files:
	
	I0923 03:40:16.841791    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/admin.conf
	I0923 03:40:16.844205    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 03:40:16.844233    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 03:40:16.847640    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/kubelet.conf
	I0923 03:40:16.850689    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 03:40:16.850714    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 03:40:16.853689    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/controller-manager.conf
	I0923 03:40:16.856090    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 03:40:16.856119    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 03:40:16.859136    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/scheduler.conf
	I0923 03:40:16.862030    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 03:40:16.862056    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 03:40:16.864501    9267 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 03:40:16.882024    9267 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0923 03:40:16.882052    9267 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 03:40:16.930683    9267 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 03:40:16.930773    9267 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 03:40:16.930819    9267 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0923 03:40:16.986863    9267 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 03:40:16.990141    9267 out.go:235]   - Generating certificates and keys ...
	I0923 03:40:16.990182    9267 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 03:40:16.990219    9267 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 03:40:16.990255    9267 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 03:40:16.990289    9267 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0923 03:40:16.990345    9267 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0923 03:40:16.990376    9267 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0923 03:40:16.990406    9267 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0923 03:40:16.990472    9267 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0923 03:40:16.990506    9267 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 03:40:16.990550    9267 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 03:40:16.990573    9267 kubeadm.go:310] [certs] Using the existing "sa" key
	I0923 03:40:16.990605    9267 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 03:40:17.105146    9267 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 03:40:17.238106    9267 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 03:40:17.432813    9267 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 03:40:17.517074    9267 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 03:40:17.546497    9267 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 03:40:17.547649    9267 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 03:40:17.547671    9267 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 03:40:17.612223    9267 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 03:40:17.616399    9267 out.go:235]   - Booting up control plane ...
	I0923 03:40:17.616449    9267 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 03:40:17.616506    9267 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 03:40:17.616559    9267 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 03:40:17.616622    9267 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 03:40:17.616821    9267 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0923 03:40:14.149603    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:22.117416    9267 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502135 seconds
	I0923 03:40:22.117474    9267 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 03:40:22.121103    9267 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 03:40:22.636753    9267 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 03:40:22.637027    9267 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-516000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 03:40:23.141969    9267 kubeadm.go:310] [bootstrap-token] Using token: 40qyrs.ydvwxghv2sden5ot
	I0923 03:40:19.151724    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:19.151838    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:19.163161    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:40:19.163246    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:19.174542    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:40:19.174624    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:19.186427    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:40:19.186519    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:19.208627    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:40:19.208713    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:19.220434    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:40:19.220511    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:19.231433    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:40:19.231518    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:19.243250    9103 logs.go:276] 0 containers: []
	W0923 03:40:19.243262    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:19.243331    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:19.257761    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:40:19.257781    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:19.257787    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:19.298047    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:40:19.298058    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:40:19.310908    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:40:19.310921    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:40:19.327706    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:40:19.327722    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:40:19.341084    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:19.341094    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:19.345866    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:40:19.345874    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:40:19.359944    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:40:19.359960    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:40:19.372623    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:40:19.372638    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:40:19.384845    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:40:19.384857    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:40:19.403010    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:40:19.403022    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:40:19.415841    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:19.415852    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:19.440973    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:19.440983    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:19.479612    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:40:19.479624    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:40:19.494046    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:40:19.494056    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:40:19.506295    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:40:19.506308    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:22.024018    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:23.144497    9267 out.go:235]   - Configuring RBAC rules ...
	I0923 03:40:23.144551    9267 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 03:40:23.144596    9267 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 03:40:23.148171    9267 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 03:40:23.149154    9267 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 03:40:23.150200    9267 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 03:40:23.151032    9267 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 03:40:23.154398    9267 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 03:40:23.327810    9267 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 03:40:23.545967    9267 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 03:40:23.546455    9267 kubeadm.go:310] 
	I0923 03:40:23.546493    9267 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 03:40:23.546496    9267 kubeadm.go:310] 
	I0923 03:40:23.546535    9267 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 03:40:23.546538    9267 kubeadm.go:310] 
	I0923 03:40:23.546554    9267 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 03:40:23.546586    9267 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 03:40:23.546609    9267 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 03:40:23.546616    9267 kubeadm.go:310] 
	I0923 03:40:23.546661    9267 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 03:40:23.546668    9267 kubeadm.go:310] 
	I0923 03:40:23.546695    9267 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 03:40:23.546700    9267 kubeadm.go:310] 
	I0923 03:40:23.546730    9267 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 03:40:23.546776    9267 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 03:40:23.546820    9267 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 03:40:23.546824    9267 kubeadm.go:310] 
	I0923 03:40:23.546872    9267 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 03:40:23.546910    9267 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 03:40:23.546913    9267 kubeadm.go:310] 
	I0923 03:40:23.546958    9267 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 40qyrs.ydvwxghv2sden5ot \
	I0923 03:40:23.547015    9267 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:81f714c3d291eb98c715930c3a37747b44257f11dd80fa89b92bbab22cea301f \
	I0923 03:40:23.547027    9267 kubeadm.go:310] 	--control-plane 
	I0923 03:40:23.547031    9267 kubeadm.go:310] 
	I0923 03:40:23.547081    9267 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 03:40:23.547084    9267 kubeadm.go:310] 
	I0923 03:40:23.547133    9267 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 40qyrs.ydvwxghv2sden5ot \
	I0923 03:40:23.547200    9267 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:81f714c3d291eb98c715930c3a37747b44257f11dd80fa89b92bbab22cea301f 
	I0923 03:40:23.547425    9267 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 03:40:23.547440    9267 cni.go:84] Creating CNI manager for ""
	I0923 03:40:23.547451    9267 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:40:23.551419    9267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 03:40:23.554445    9267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 03:40:23.557607    9267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 03:40:23.563523    9267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 03:40:23.563588    9267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 03:40:23.563613    9267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-516000 minikube.k8s.io/updated_at=2024_09_23T03_40_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=stopped-upgrade-516000 minikube.k8s.io/primary=true
	I0923 03:40:23.610046    9267 ops.go:34] apiserver oom_adj: -16
	I0923 03:40:23.610192    9267 kubeadm.go:1113] duration metric: took 46.662792ms to wait for elevateKubeSystemPrivileges
	I0923 03:40:23.610203    9267 kubeadm.go:394] duration metric: took 4m11.678443875s to StartCluster
	I0923 03:40:23.610212    9267 settings.go:142] acquiring lock: {Name:mk179b7e7e669ed9fc071f7eb5301e91538a634e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:40:23.610311    9267 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:40:23.610748    9267 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/kubeconfig: {Name:mkaea904cf5cdb46cc70169c92ea1151561be4fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:40:23.610959    9267 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:40:23.611011    9267 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 03:40:23.611089    9267 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-516000"
	I0923 03:40:23.611099    9267 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-516000"
	I0923 03:40:23.611099    9267 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-516000"
	I0923 03:40:23.611109    9267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-516000"
	W0923 03:40:23.611102    9267 addons.go:243] addon storage-provisioner should already be in state true
	I0923 03:40:23.611138    9267 host.go:66] Checking if "stopped-upgrade-516000" exists ...
	I0923 03:40:23.611230    9267 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:40:23.612319    9267 kapi.go:59] client config for stopped-upgrade-516000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/client.key", CAFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10675a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 03:40:23.612447    9267 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-516000"
	W0923 03:40:23.612452    9267 addons.go:243] addon default-storageclass should already be in state true
	I0923 03:40:23.612460    9267 host.go:66] Checking if "stopped-upgrade-516000" exists ...
	I0923 03:40:23.615386    9267 out.go:177] * Verifying Kubernetes components...
	I0923 03:40:23.615799    9267 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 03:40:23.618598    9267 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 03:40:23.618606    9267 sshutil.go:53] new ssh client: &{IP:localhost Port:51483 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/id_rsa Username:docker}
	I0923 03:40:23.621378    9267 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:40:23.625382    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:40:23.631384    9267 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 03:40:23.631392    9267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 03:40:23.631400    9267 sshutil.go:53] new ssh client: &{IP:localhost Port:51483 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/id_rsa Username:docker}
	I0923 03:40:23.703228    9267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 03:40:23.708919    9267 api_server.go:52] waiting for apiserver process to appear ...
	I0923 03:40:23.708966    9267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 03:40:23.712527    9267 api_server.go:72] duration metric: took 101.558208ms to wait for apiserver process to appear ...
	I0923 03:40:23.712535    9267 api_server.go:88] waiting for apiserver healthz status ...
	I0923 03:40:23.712542    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:23.725112    9267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 03:40:23.764823    9267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 03:40:24.095165    9267 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 03:40:24.095176    9267 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 03:40:27.026112    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:27.026257    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:27.039480    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:40:27.039563    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:27.050107    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:40:27.050188    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:27.060613    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:40:27.060688    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:27.071589    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:40:27.071671    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:27.082056    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:40:27.082132    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:27.092348    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:40:27.092432    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:27.102760    9103 logs.go:276] 0 containers: []
	W0923 03:40:27.102770    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:27.102831    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:27.113445    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:40:27.113464    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:40:27.113469    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:40:27.128122    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:40:27.128134    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:40:27.141781    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:40:27.141791    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:40:27.153375    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:27.153389    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:27.190711    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:27.190720    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:27.195706    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:40:27.195714    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:40:27.207322    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:40:27.207336    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:40:27.218926    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:27.218936    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:27.243310    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:40:27.243318    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:27.254784    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:27.254799    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:27.291186    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:40:27.291200    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:40:27.303380    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:40:27.303393    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:40:27.314833    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:40:27.314847    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:40:27.329840    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:40:27.329853    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:40:27.341732    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:40:27.341746    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:40:28.714544    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:28.714597    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:29.861465    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:33.714811    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:33.714853    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:34.863448    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:34.863573    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:34.878130    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:40:34.878214    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:34.888406    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:40:34.888489    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:34.899096    9103 logs.go:276] 4 containers: [0752b17b0c08 41cccde2068e 5173168dcb78 a9c407fbfbed]
	I0923 03:40:34.899181    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:34.911700    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:40:34.911768    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:34.925779    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:40:34.925856    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:34.936098    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:40:34.936179    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:34.950858    9103 logs.go:276] 0 containers: []
	W0923 03:40:34.950875    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:34.950943    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:34.961243    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:40:34.961262    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:34.961269    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:34.966389    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:34.966397    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:35.001306    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:40:35.001317    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:40:35.014212    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:40:35.014224    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:40:35.026079    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:35.026094    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:35.062625    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:40:35.062632    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:40:35.077151    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:40:35.077162    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:40:35.088530    9103 logs.go:123] Gathering logs for coredns [5173168dcb78] ...
	I0923 03:40:35.088546    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5173168dcb78"
	I0923 03:40:35.100656    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:35.100669    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:35.125422    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:40:35.125432    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:40:35.141304    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:40:35.141315    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:40:35.153418    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:40:35.153429    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:40:35.168086    9103 logs.go:123] Gathering logs for coredns [a9c407fbfbed] ...
	I0923 03:40:35.168096    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c407fbfbed"
	I0923 03:40:35.180419    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:40:35.180435    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:40:35.199026    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:40:35.199037    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:37.712875    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:38.715123    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:38.715148    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:42.714973    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:42.715246    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:42.731857    9103 logs.go:276] 1 containers: [9f74a13a312f]
	I0923 03:40:42.731957    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:42.744659    9103 logs.go:276] 1 containers: [44fb59581d75]
	I0923 03:40:42.744741    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:42.755876    9103 logs.go:276] 4 containers: [1aef8dd622dc cfb21961ef92 0752b17b0c08 41cccde2068e]
	I0923 03:40:42.755961    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:42.766382    9103 logs.go:276] 1 containers: [50107d377d1f]
	I0923 03:40:42.766455    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:42.776556    9103 logs.go:276] 1 containers: [ab05e4b20bee]
	I0923 03:40:42.776637    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:42.786975    9103 logs.go:276] 1 containers: [c171f7fafe13]
	I0923 03:40:42.787053    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:42.796726    9103 logs.go:276] 0 containers: []
	W0923 03:40:42.796737    9103 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:42.796803    9103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:42.807645    9103 logs.go:276] 1 containers: [7010c94e436d]
	I0923 03:40:42.807666    9103 logs.go:123] Gathering logs for coredns [0752b17b0c08] ...
	I0923 03:40:42.807671    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0752b17b0c08"
	I0923 03:40:42.819740    9103 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:42.819750    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:42.842496    9103 logs.go:123] Gathering logs for container status ...
	I0923 03:40:42.842506    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:42.854007    9103 logs.go:123] Gathering logs for coredns [1aef8dd622dc] ...
	I0923 03:40:42.854017    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aef8dd622dc"
	I0923 03:40:42.865365    9103 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:42.865378    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:42.870331    9103 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:42.870337    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:42.905028    9103 logs.go:123] Gathering logs for kube-apiserver [9f74a13a312f] ...
	I0923 03:40:42.905040    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f74a13a312f"
	I0923 03:40:42.919468    9103 logs.go:123] Gathering logs for coredns [cfb21961ef92] ...
	I0923 03:40:42.919481    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb21961ef92"
	I0923 03:40:42.930783    9103 logs.go:123] Gathering logs for storage-provisioner [7010c94e436d] ...
	I0923 03:40:42.930797    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7010c94e436d"
	I0923 03:40:42.942342    9103 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:42.942357    9103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:42.980410    9103 logs.go:123] Gathering logs for kube-proxy [ab05e4b20bee] ...
	I0923 03:40:42.980418    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab05e4b20bee"
	I0923 03:40:42.992196    9103 logs.go:123] Gathering logs for kube-controller-manager [c171f7fafe13] ...
	I0923 03:40:42.992210    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c171f7fafe13"
	I0923 03:40:43.010602    9103 logs.go:123] Gathering logs for coredns [41cccde2068e] ...
	I0923 03:40:43.010617    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cccde2068e"
	I0923 03:40:43.022842    9103 logs.go:123] Gathering logs for kube-scheduler [50107d377d1f] ...
	I0923 03:40:43.022852    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50107d377d1f"
	I0923 03:40:43.038009    9103 logs.go:123] Gathering logs for etcd [44fb59581d75] ...
	I0923 03:40:43.038022    9103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44fb59581d75"
	I0923 03:40:43.715930    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:43.715960    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:45.563837    9103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:50.564884    9103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:50.570480    9103 out.go:201] 
	W0923 03:40:50.575473    9103 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0923 03:40:50.575485    9103 out.go:270] * 
	W0923 03:40:50.576315    9103 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:40:50.590400    9103 out.go:201] 
	I0923 03:40:48.716473    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:48.716516    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:53.717282    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:53.717330    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0923 03:40:54.096772    9267 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0923 03:40:54.101645    9267 out.go:177] * Enabled addons: storage-provisioner
	I0923 03:40:54.117565    9267 addons.go:510] duration metric: took 30.507246625s for enable addons: enabled=[storage-provisioner]
	I0923 03:40:58.718367    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:58.718406    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
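
Note on the failure mode above: both test processes (9103 and 9267) spend the run probing the apiserver's /healthz endpoint at https://10.0.2.15:8443 from the host, every probe hits the client timeout, and once the 6m0s node wait expires the run exits with GUEST_START. The probe is easy to repeat by hand to separate a host-to-guest routing problem from an unhealthy apiserver; a minimal sketch, assuming SSH access to the profile's VM and that curl is available in the guest (flags are illustrative, not taken from this log):

    # Repeat the health check that api_server.go performs, but from inside
    # the guest: hit /healthz, skip TLS verification, give up after 5s.
    minikube -p running-upgrade-515000 ssh -- \
      curl -k --max-time 5 https://10.0.2.15:8443/healthz

If this answers "ok" while the test's host-side probes keep timing out, the apiserver itself is healthy and the failure sits in the host-to-10.0.2.15 path of the qemu2 driver's user-mode network.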
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-09-23 10:32:00 UTC, ends at Mon 2024-09-23 10:41:06 UTC. --
	Sep 23 10:40:43 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:43Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 23 10:40:48 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:48Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 23 10:40:51 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:51Z" level=error msg="ContainerStats resp: {0x40008ff980 linux}"
	Sep 23 10:40:51 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:51Z" level=error msg="ContainerStats resp: {0x40008ffd40 linux}"
	Sep 23 10:40:52 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:52Z" level=error msg="ContainerStats resp: {0x4000584600 linux}"
	Sep 23 10:40:53 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:53Z" level=error msg="ContainerStats resp: {0x40003afbc0 linux}"
	Sep 23 10:40:53 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:53Z" level=error msg="ContainerStats resp: {0x4000585780 linux}"
	Sep 23 10:40:53 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:53Z" level=error msg="ContainerStats resp: {0x40005f20c0 linux}"
	Sep 23 10:40:53 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:53Z" level=error msg="ContainerStats resp: {0x40005f2200 linux}"
	Sep 23 10:40:53 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:53Z" level=error msg="ContainerStats resp: {0x40005f2980 linux}"
	Sep 23 10:40:53 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:53Z" level=error msg="ContainerStats resp: {0x40005f2dc0 linux}"
	Sep 23 10:40:53 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:53Z" level=error msg="ContainerStats resp: {0x40005f2f00 linux}"
	Sep 23 10:40:53 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:53Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 23 10:40:58 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:40:58Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 23 10:41:03 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:41:03Z" level=error msg="ContainerStats resp: {0x400076e940 linux}"
	Sep 23 10:41:03 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:41:03Z" level=error msg="ContainerStats resp: {0x40008fee40 linux}"
	Sep 23 10:41:03 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:41:03Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 23 10:41:04 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:41:04Z" level=error msg="ContainerStats resp: {0x4000584200 linux}"
	Sep 23 10:41:05 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:41:05Z" level=error msg="ContainerStats resp: {0x40003affc0 linux}"
	Sep 23 10:41:05 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:41:05Z" level=error msg="ContainerStats resp: {0x40008a2340 linux}"
	Sep 23 10:41:05 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:41:05Z" level=error msg="ContainerStats resp: {0x4000585a80 linux}"
	Sep 23 10:41:05 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:41:05Z" level=error msg="ContainerStats resp: {0x40003a1340 linux}"
	Sep 23 10:41:05 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:41:05Z" level=error msg="ContainerStats resp: {0x40008a28c0 linux}"
	Sep 23 10:41:05 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:41:05Z" level=error msg="ContainerStats resp: {0x40004fe740 linux}"
	Sep 23 10:41:05 running-upgrade-515000 cri-dockerd[2991]: time="2024-09-23T10:41:05Z" level=error msg="ContainerStats resp: {0x40008a2380 linux}"
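
The cri-dockerd messages show it repeatedly reloading /etc/cni/net.d/1-k8s.conflist, the bridge CNI configuration the client log above showed being copied into the VM (496 bytes). For reference, a representative conflist of the shape minikube writes for the bridge CNI; the plugin and field names follow the standard CNI bridge/host-local/portmap plugins, but the concrete values here are illustrative, not dumped from this VM:

    # Illustrative only: a bridge CNI config of the kind minikube places
    # at /etc/cni/net.d/1-k8s.conflist (values assumed, not read back).
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF

The pod subnet should line up with the node's PodCIDR, which the describe-nodes section below reports as 10.244.0.0/24.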
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1aef8dd622dcc       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   1eb4b921033e2
	cfb21961ef92b       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   894a2264a8944
	0752b17b0c08e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   894a2264a8944
	41cccde2068ea       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   1eb4b921033e2
	ab05e4b20bee2       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   0d9e2a61b7831
	7010c94e436db       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   97d0bdba4f2bd
	44fb59581d754       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   205d69b84bb31
	50107d377d1f6       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   10de22e9e3314
	c171f7fafe133       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   5c5e3a07a7629
	9f74a13a312fe       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   40c9a060cea7d
	
	
	==> coredns [0752b17b0c08] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6365497420435991300.384621726975943908. HINFO: read udp 10.244.0.2:47206->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6365497420435991300.384621726975943908. HINFO: read udp 10.244.0.2:56373->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6365497420435991300.384621726975943908. HINFO: read udp 10.244.0.2:48067->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6365497420435991300.384621726975943908. HINFO: read udp 10.244.0.2:45663->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6365497420435991300.384621726975943908. HINFO: read udp 10.244.0.2:60932->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6365497420435991300.384621726975943908. HINFO: read udp 10.244.0.2:46839->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6365497420435991300.384621726975943908. HINFO: read udp 10.244.0.2:52520->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6365497420435991300.384621726975943908. HINFO: read udp 10.244.0.2:39906->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6365497420435991300.384621726975943908. HINFO: read udp 10.244.0.2:47678->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6365497420435991300.384621726975943908. HINFO: read udp 10.244.0.2:42832->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [1aef8dd622dc] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4346793032821162557.4550328601642469267. HINFO: read udp 10.244.0.3:60212->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4346793032821162557.4550328601642469267. HINFO: read udp 10.244.0.3:59958->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4346793032821162557.4550328601642469267. HINFO: read udp 10.244.0.3:37467->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4346793032821162557.4550328601642469267. HINFO: read udp 10.244.0.3:53290->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4346793032821162557.4550328601642469267. HINFO: read udp 10.244.0.3:40913->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4346793032821162557.4550328601642469267. HINFO: read udp 10.244.0.3:57547->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4346793032821162557.4550328601642469267. HINFO: read udp 10.244.0.3:39129->10.0.2.3:53: i/o timeout
	
	
	==> coredns [41cccde2068e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 838304112712009563.2636470533025908526. HINFO: read udp 10.244.0.3:37594->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 838304112712009563.2636470533025908526. HINFO: read udp 10.244.0.3:39616->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 838304112712009563.2636470533025908526. HINFO: read udp 10.244.0.3:52120->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 838304112712009563.2636470533025908526. HINFO: read udp 10.244.0.3:40397->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 838304112712009563.2636470533025908526. HINFO: read udp 10.244.0.3:51178->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 838304112712009563.2636470533025908526. HINFO: read udp 10.244.0.3:59567->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 838304112712009563.2636470533025908526. HINFO: read udp 10.244.0.3:43687->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 838304112712009563.2636470533025908526. HINFO: read udp 10.244.0.3:49047->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 838304112712009563.2636470533025908526. HINFO: read udp 10.244.0.3:42868->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 838304112712009563.2636470533025908526. HINFO: read udp 10.244.0.3:41288->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cfb21961ef92] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2396362833121572728.3454658967107201622. HINFO: read udp 10.244.0.2:47898->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2396362833121572728.3454658967107201622. HINFO: read udp 10.244.0.2:59690->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2396362833121572728.3454658967107201622. HINFO: read udp 10.244.0.2:37536->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2396362833121572728.3454658967107201622. HINFO: read udp 10.244.0.2:60009->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2396362833121572728.3454658967107201622. HINFO: read udp 10.244.0.2:44601->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2396362833121572728.3454658967107201622. HINFO: read udp 10.244.0.2:39109->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2396362833121572728.3454658967107201622. HINFO: read udp 10.244.0.2:42218->10.0.2.3:53: i/o timeout
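
All four coredns instances log the same pattern: their startup HINFO queries are forwarded upstream to 10.0.2.3:53, the DNS proxy that QEMU's user-mode (slirp) network exposes to the guest, and every read times out. The upstream path can be tested directly from inside the VM; a minimal sketch, assuming dig is installed in the guest (it is not part of the minimal Buildroot image, so busybox nslookup may be needed instead):

    # Query the slirp resolver that coredns forwards to; a working DNS
    # path answers promptly, while this environment times out.
    dig +time=5 +tries=1 @10.0.2.3 kubernetes.io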
	
	
	==> describe nodes <==
	Name:               running-upgrade-515000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-515000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=running-upgrade-515000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T03_36_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:36:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-515000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:41:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:36:49 +0000   Mon, 23 Sep 2024 10:36:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:36:49 +0000   Mon, 23 Sep 2024 10:36:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:36:49 +0000   Mon, 23 Sep 2024 10:36:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:36:49 +0000   Mon, 23 Sep 2024 10:36:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-515000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 28684baf4bf84f9b85a857bc66a1546d
	  System UUID:                28684baf4bf84f9b85a857bc66a1546d
	  Boot ID:                    cf6eaf92-1e6e-4c0d-be3b-bf03a09dfb2f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-6x5d7                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-xqmqh                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-515000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-515000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-515000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-jzr58                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-515000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-515000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-515000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-515000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-515000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-515000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-515000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-515000 status is now: NodeReady
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-515000 event: Registered Node running-upgrade-515000 in Controller
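
A quick consistency check on the Allocated resources table above: the percentages are the requests divided by node capacity with integer truncation, so 850m of CPU against 2 CPUs is 850/2000 = 42%, and 240Mi of memory against 2148820Ki is (240*1024*100)/2148820 = 11%. As shell arithmetic:

    # Recompute the request percentages kubectl prints (integer math).
    echo $(( 850 * 100 / 2000 ))            # cpu: 42
    echo $(( 240 * 1024 * 100 / 2148820 ))  # memory: 11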
	
	
	==> dmesg <==
	[  +1.920404] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.083187] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.076400] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.139483] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.088824] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.079702] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.217310] systemd-fstab-generator[1290]: Ignoring "noauto" for root device
	[  +9.690033] systemd-fstab-generator[1928]: Ignoring "noauto" for root device
	[  +2.477216] systemd-fstab-generator[2201]: Ignoring "noauto" for root device
	[  +0.191541] systemd-fstab-generator[2239]: Ignoring "noauto" for root device
	[  +0.095212] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[  +0.086284] systemd-fstab-generator[2263]: Ignoring "noauto" for root device
	[  +2.747580] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.188117] systemd-fstab-generator[2948]: Ignoring "noauto" for root device
	[  +0.088655] systemd-fstab-generator[2959]: Ignoring "noauto" for root device
	[  +0.080744] systemd-fstab-generator[2970]: Ignoring "noauto" for root device
	[  +0.093141] systemd-fstab-generator[2984]: Ignoring "noauto" for root device
	[  +2.320064] systemd-fstab-generator[3140]: Ignoring "noauto" for root device
	[  +3.293438] systemd-fstab-generator[3539]: Ignoring "noauto" for root device
	[  +1.208814] systemd-fstab-generator[3831]: Ignoring "noauto" for root device
	[ +19.692016] kauditd_printk_skb: 68 callbacks suppressed
	[Sep23 10:36] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.256548] systemd-fstab-generator[11515]: Ignoring "noauto" for root device
	[  +5.122367] systemd-fstab-generator[12104]: Ignoring "noauto" for root device
	[  +0.487118] systemd-fstab-generator[12240]: Ignoring "noauto" for root device
	
	
	==> etcd [44fb59581d75] <==
	{"level":"info","ts":"2024-09-23T10:36:45.708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-23T10:36:45.708Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-23T10:36:45.717Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T10:36:45.733Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-23T10:36:45.733Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-23T10:36:45.733Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T10:36:45.733Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T10:36:45.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-23T10:36:45.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-23T10:36:45.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-23T10:36:45.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T10:36:45.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-23T10:36:45.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-23T10:36:45.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-23T10:36:45.792Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-515000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T10:36:45.792Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:36:45.793Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-23T10:36:45.793Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:45.795Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:36:45.795Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T10:36:45.799Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:45.799Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:45.799Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:45.799Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T10:36:45.799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:41:07 up 9 min,  0 users,  load average: 0.39, 0.20, 0.11
	Linux running-upgrade-515000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [9f74a13a312f] <==
	I0923 10:36:47.319016       1 cache.go:39] Caches are synced for autoregister controller
	I0923 10:36:47.321259       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0923 10:36:47.322125       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 10:36:47.322324       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0923 10:36:47.322406       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0923 10:36:47.322535       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0923 10:36:47.328229       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0923 10:36:48.064780       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0923 10:36:48.224931       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0923 10:36:48.229844       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0923 10:36:48.229873       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 10:36:48.367803       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 10:36:48.380193       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 10:36:48.487884       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0923 10:36:48.489904       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0923 10:36:48.490293       1 controller.go:611] quota admission added evaluator for: endpoints
	I0923 10:36:48.491754       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0923 10:36:49.351510       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0923 10:36:49.637242       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0923 10:36:49.640805       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0923 10:36:49.653674       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0923 10:36:49.687006       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 10:37:03.057519       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0923 10:37:03.107745       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0923 10:37:03.670859       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [c171f7fafe13] <==
	I0923 10:37:02.372684       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0923 10:37:02.372704       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-515000. Assuming now as a timestamp.
	I0923 10:37:02.372724       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0923 10:37:02.372894       1 event.go:294] "Event occurred" object="running-upgrade-515000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-515000 event: Registered Node running-upgrade-515000 in Controller"
	I0923 10:37:02.372945       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0923 10:37:02.374919       1 shared_informer.go:262] Caches are synced for persistent volume
	I0923 10:37:02.400913       1 shared_informer.go:262] Caches are synced for job
	I0923 10:37:02.402079       1 shared_informer.go:262] Caches are synced for stateful set
	I0923 10:37:02.402101       1 shared_informer.go:262] Caches are synced for HPA
	I0923 10:37:02.406276       1 shared_informer.go:262] Caches are synced for attach detach
	I0923 10:37:02.406891       1 shared_informer.go:262] Caches are synced for resource quota
	I0923 10:37:02.453348       1 shared_informer.go:262] Caches are synced for GC
	I0923 10:37:02.453354       1 shared_informer.go:262] Caches are synced for daemon sets
	I0923 10:37:02.453359       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0923 10:37:02.453368       1 shared_informer.go:262] Caches are synced for PVC protection
	I0923 10:37:02.455406       1 shared_informer.go:262] Caches are synced for disruption
	I0923 10:37:02.455416       1 disruption.go:371] Sending events to api server.
	I0923 10:37:02.457956       1 shared_informer.go:262] Caches are synced for resource quota
	I0923 10:37:02.823364       1 shared_informer.go:262] Caches are synced for garbage collector
	I0923 10:37:02.902797       1 shared_informer.go:262] Caches are synced for garbage collector
	I0923 10:37:02.902822       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0923 10:37:03.058734       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0923 10:37:03.110282       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jzr58"
	I0923 10:37:03.208419       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-6x5d7"
	I0923 10:37:03.220284       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-xqmqh"
	
	
	==> kube-proxy [ab05e4b20bee] <==
	I0923 10:37:03.651477       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0923 10:37:03.651502       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0923 10:37:03.651512       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0923 10:37:03.669017       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0923 10:37:03.669057       1 server_others.go:206] "Using iptables Proxier"
	I0923 10:37:03.669067       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0923 10:37:03.669211       1 server.go:661] "Version info" version="v1.24.1"
	I0923 10:37:03.669219       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:37:03.669597       1 config.go:444] "Starting node config controller"
	I0923 10:37:03.669606       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0923 10:37:03.669615       1 config.go:317] "Starting service config controller"
	I0923 10:37:03.669616       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0923 10:37:03.669622       1 config.go:226] "Starting endpoint slice config controller"
	I0923 10:37:03.669624       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0923 10:37:03.770355       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0923 10:37:03.770369       1 shared_informer.go:262] Caches are synced for service config
	I0923 10:37:03.770355       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [50107d377d1f] <==
	W0923 10:36:47.277596       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:47.277616       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0923 10:36:47.277631       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:36:47.277657       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 10:36:47.277684       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:47.277691       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0923 10:36:47.277661       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:47.277697       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0923 10:36:47.277673       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:36:47.277722       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0923 10:36:47.277775       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:36:47.277795       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0923 10:36:47.277647       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:36:47.277827       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0923 10:36:47.277878       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 10:36:47.277898       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0923 10:36:48.150450       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 10:36:48.150519       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0923 10:36:48.255702       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 10:36:48.255842       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0923 10:36:48.285784       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:48.285804       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0923 10:36:48.329020       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:48.329107       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0923 10:36:48.773188       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-09-23 10:32:00 UTC, ends at Mon 2024-09-23 10:41:07 UTC. --
	Sep 23 10:36:50 running-upgrade-515000 kubelet[12110]: I0923 10:36:50.883689   12110 reconciler.go:157] "Reconciler: start to sync state"
	Sep 23 10:36:51 running-upgrade-515000 kubelet[12110]: E0923 10:36:51.264890   12110 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-515000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-515000"
	Sep 23 10:36:51 running-upgrade-515000 kubelet[12110]: E0923 10:36:51.464473   12110 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-515000\" already exists" pod="kube-system/etcd-running-upgrade-515000"
	Sep 23 10:36:51 running-upgrade-515000 kubelet[12110]: E0923 10:36:51.665274   12110 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-515000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-515000"
	Sep 23 10:36:51 running-upgrade-515000 kubelet[12110]: I0923 10:36:51.862214   12110 request.go:601] Waited for 1.149366786s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 23 10:36:51 running-upgrade-515000 kubelet[12110]: E0923 10:36:51.864733   12110 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-515000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-515000"
	Sep 23 10:37:02 running-upgrade-515000 kubelet[12110]: I0923 10:37:02.181859   12110 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 23 10:37:02 running-upgrade-515000 kubelet[12110]: I0923 10:37:02.182348   12110 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 23 10:37:02 running-upgrade-515000 kubelet[12110]: I0923 10:37:02.378236   12110 topology_manager.go:200] "Topology Admit Handler"
	Sep 23 10:37:02 running-upgrade-515000 kubelet[12110]: I0923 10:37:02.486259   12110 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6fa9dcd9-4210-46b7-9128-3150adde7fe9-tmp\") pod \"storage-provisioner\" (UID: \"6fa9dcd9-4210-46b7-9128-3150adde7fe9\") " pod="kube-system/storage-provisioner"
	Sep 23 10:37:02 running-upgrade-515000 kubelet[12110]: I0923 10:37:02.486289   12110 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2grw\" (UniqueName: \"kubernetes.io/projected/6fa9dcd9-4210-46b7-9128-3150adde7fe9-kube-api-access-f2grw\") pod \"storage-provisioner\" (UID: \"6fa9dcd9-4210-46b7-9128-3150adde7fe9\") " pod="kube-system/storage-provisioner"
	Sep 23 10:37:02 running-upgrade-515000 kubelet[12110]: I0923 10:37:02.802611   12110 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="97d0bdba4f2bd4b92bacfe76cc150753b4394067c6c4342e9a1811b5e5247cbb"
	Sep 23 10:37:03 running-upgrade-515000 kubelet[12110]: I0923 10:37:03.112446   12110 topology_manager.go:200] "Topology Admit Handler"
	Sep 23 10:37:03 running-upgrade-515000 kubelet[12110]: I0923 10:37:03.211432   12110 topology_manager.go:200] "Topology Admit Handler"
	Sep 23 10:37:03 running-upgrade-515000 kubelet[12110]: I0923 10:37:03.223275   12110 topology_manager.go:200] "Topology Admit Handler"
	Sep 23 10:37:03 running-upgrade-515000 kubelet[12110]: I0923 10:37:03.291129   12110 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fde7fe0b-7b01-4080-a607-02f4f6248b6b-kube-proxy\") pod \"kube-proxy-jzr58\" (UID: \"fde7fe0b-7b01-4080-a607-02f4f6248b6b\") " pod="kube-system/kube-proxy-jzr58"
	Sep 23 10:37:03 running-upgrade-515000 kubelet[12110]: I0923 10:37:03.291153   12110 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fde7fe0b-7b01-4080-a607-02f4f6248b6b-lib-modules\") pod \"kube-proxy-jzr58\" (UID: \"fde7fe0b-7b01-4080-a607-02f4f6248b6b\") " pod="kube-system/kube-proxy-jzr58"
	Sep 23 10:37:03 running-upgrade-515000 kubelet[12110]: I0923 10:37:03.291164   12110 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fde7fe0b-7b01-4080-a607-02f4f6248b6b-xtables-lock\") pod \"kube-proxy-jzr58\" (UID: \"fde7fe0b-7b01-4080-a607-02f4f6248b6b\") " pod="kube-system/kube-proxy-jzr58"
	Sep 23 10:37:03 running-upgrade-515000 kubelet[12110]: I0923 10:37:03.291175   12110 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54btv\" (UniqueName: \"kubernetes.io/projected/fde7fe0b-7b01-4080-a607-02f4f6248b6b-kube-api-access-54btv\") pod \"kube-proxy-jzr58\" (UID: \"fde7fe0b-7b01-4080-a607-02f4f6248b6b\") " pod="kube-system/kube-proxy-jzr58"
	Sep 23 10:37:03 running-upgrade-515000 kubelet[12110]: I0923 10:37:03.392219   12110 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9rcq\" (UniqueName: \"kubernetes.io/projected/1145f369-dc7a-41d7-8f66-3495963fc736-kube-api-access-b9rcq\") pod \"coredns-6d4b75cb6d-6x5d7\" (UID: \"1145f369-dc7a-41d7-8f66-3495963fc736\") " pod="kube-system/coredns-6d4b75cb6d-6x5d7"
	Sep 23 10:37:03 running-upgrade-515000 kubelet[12110]: I0923 10:37:03.392258   12110 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1145f369-dc7a-41d7-8f66-3495963fc736-config-volume\") pod \"coredns-6d4b75cb6d-6x5d7\" (UID: \"1145f369-dc7a-41d7-8f66-3495963fc736\") " pod="kube-system/coredns-6d4b75cb6d-6x5d7"
	Sep 23 10:37:03 running-upgrade-515000 kubelet[12110]: I0923 10:37:03.392282   12110 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzlb5\" (UniqueName: \"kubernetes.io/projected/8d5510d8-2764-4c71-97a7-b01375fe4acb-kube-api-access-fzlb5\") pod \"coredns-6d4b75cb6d-xqmqh\" (UID: \"8d5510d8-2764-4c71-97a7-b01375fe4acb\") " pod="kube-system/coredns-6d4b75cb6d-xqmqh"
	Sep 23 10:37:03 running-upgrade-515000 kubelet[12110]: I0923 10:37:03.392303   12110 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d5510d8-2764-4c71-97a7-b01375fe4acb-config-volume\") pod \"coredns-6d4b75cb6d-xqmqh\" (UID: \"8d5510d8-2764-4c71-97a7-b01375fe4acb\") " pod="kube-system/coredns-6d4b75cb6d-xqmqh"
	Sep 23 10:40:42 running-upgrade-515000 kubelet[12110]: I0923 10:40:42.199237   12110 scope.go:110] "RemoveContainer" containerID="5173168dcb78c3f01e587ac11ed8c8bd9017324268b5afc197c61305f7d3910b"
	Sep 23 10:40:42 running-upgrade-515000 kubelet[12110]: I0923 10:40:42.211570   12110 scope.go:110] "RemoveContainer" containerID="a9c407fbfbedfb6a5d1a8f18fd91a3304fe463d4f624f803c6d83dde757fb868"
	
	
	==> storage-provisioner [7010c94e436d] <==
	I0923 10:37:02.859376       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:37:02.863935       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:37:02.863954       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:37:02.866753       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:37:02.866820       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-515000_e4c8738d-985b-4399-b003-8f76b619181c!
	I0923 10:37:02.867192       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"66e8a3d3-6709-4d00-b83f-3d2c275137db", APIVersion:"v1", ResourceVersion:"324", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-515000_e4c8738d-985b-4399-b003-8f76b619181c became leader
	I0923 10:37:02.967563       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-515000_e4c8738d-985b-4399-b003-8f76b619181c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-515000 -n running-upgrade-515000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-515000 -n running-upgrade-515000: exit status 2 (15.687818166s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-515000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-515000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-515000
--- FAIL: TestRunningBinaryUpgrade (593.07s)
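
Both qemu2 VM creation attempts in TestKubernetesUpgrade below abort with "Failed to connect to \"/var/run/socket_vmnet\": Connection refused", i.e. the socket_vmnet daemon that backs the selected socket_vmnet network was not reachable on the build agent. A minimal host-side triage sketch, assuming a Homebrew-managed socket_vmnet install at the paths minikube logs (/opt/socket_vmnet, /var/run/socket_vmnet):

	# Verify the daemon's unix socket exists on the host
	ls -l /var/run/socket_vmnet
	# Check whether the root-level socket_vmnet service is loaded
	sudo launchctl list | grep -i socket_vmnet
	# Restart the daemon via Homebrew services, then re-run the test
	sudo brew services restart socket_vmnet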

TestKubernetesUpgrade (18.85s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-915000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-915000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.863770625s)

-- stdout --
	* [kubernetes-upgrade-915000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-915000" primary control-plane node in "kubernetes-upgrade-915000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-915000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:34:30.052387    9176 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:34:30.052530    9176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:34:30.052534    9176 out.go:358] Setting ErrFile to fd 2...
	I0923 03:34:30.052536    9176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:34:30.052670    9176 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:34:30.053740    9176 out.go:352] Setting JSON to false
	I0923 03:34:30.070142    9176 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5641,"bootTime":1727082029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:34:30.070261    9176 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:34:30.074964    9176 out.go:177] * [kubernetes-upgrade-915000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:34:30.078865    9176 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:34:30.078971    9176 notify.go:220] Checking for updates...
	I0923 03:34:30.085813    9176 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:34:30.088906    9176 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:34:30.092734    9176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:34:30.096836    9176 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:34:30.099861    9176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:34:30.101542    9176 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:34:30.101605    9176 config.go:182] Loaded profile config "running-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:34:30.101649    9176 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:34:30.105866    9176 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:34:30.112829    9176 start.go:297] selected driver: qemu2
	I0923 03:34:30.112835    9176 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:34:30.112842    9176 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:34:30.115113    9176 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:34:30.117856    9176 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:34:30.120948    9176 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 03:34:30.120969    9176 cni.go:84] Creating CNI manager for ""
	I0923 03:34:30.121001    9176 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 03:34:30.121045    9176 start.go:340] cluster config:
	{Name:kubernetes-upgrade-915000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-915000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:34:30.124508    9176 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:34:30.132845    9176 out.go:177] * Starting "kubernetes-upgrade-915000" primary control-plane node in "kubernetes-upgrade-915000" cluster
	I0923 03:34:30.136836    9176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 03:34:30.136852    9176 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 03:34:30.136859    9176 cache.go:56] Caching tarball of preloaded images
	I0923 03:34:30.136923    9176 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:34:30.136929    9176 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 03:34:30.136975    9176 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/kubernetes-upgrade-915000/config.json ...
	I0923 03:34:30.136985    9176 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/kubernetes-upgrade-915000/config.json: {Name:mk7603a254a669c4a00e27d80ac61b968a166904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:34:30.137254    9176 start.go:360] acquireMachinesLock for kubernetes-upgrade-915000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:34:30.137286    9176 start.go:364] duration metric: took 25.958µs to acquireMachinesLock for "kubernetes-upgrade-915000"
	I0923 03:34:30.137300    9176 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-915000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-915000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:34:30.137331    9176 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:34:30.144928    9176 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:34:30.160989    9176 start.go:159] libmachine.API.Create for "kubernetes-upgrade-915000" (driver="qemu2")
	I0923 03:34:30.161018    9176 client.go:168] LocalClient.Create starting
	I0923 03:34:30.161079    9176 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:34:30.161112    9176 main.go:141] libmachine: Decoding PEM data...
	I0923 03:34:30.161122    9176 main.go:141] libmachine: Parsing certificate...
	I0923 03:34:30.161155    9176 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:34:30.161177    9176 main.go:141] libmachine: Decoding PEM data...
	I0923 03:34:30.161186    9176 main.go:141] libmachine: Parsing certificate...
	I0923 03:34:30.161568    9176 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:34:30.349092    9176 main.go:141] libmachine: Creating SSH key...
	I0923 03:34:30.497558    9176 main.go:141] libmachine: Creating Disk image...
	I0923 03:34:30.497566    9176 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:34:30.497771    9176 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2
	I0923 03:34:30.507415    9176 main.go:141] libmachine: STDOUT: 
	I0923 03:34:30.507438    9176 main.go:141] libmachine: STDERR: 
	I0923 03:34:30.507496    9176 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2 +20000M
	I0923 03:34:30.515612    9176 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:34:30.515628    9176 main.go:141] libmachine: STDERR: 
	I0923 03:34:30.515650    9176 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2
	I0923 03:34:30.515655    9176 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:34:30.515667    9176 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:34:30.515692    9176 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:6c:bb:96:b7:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2
	I0923 03:34:30.517381    9176 main.go:141] libmachine: STDOUT: 
	I0923 03:34:30.517397    9176 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:34:30.517417    9176 client.go:171] duration metric: took 356.399125ms to LocalClient.Create
	I0923 03:34:32.519589    9176 start.go:128] duration metric: took 2.382272041s to createHost
	I0923 03:34:32.519666    9176 start.go:83] releasing machines lock for "kubernetes-upgrade-915000", held for 2.382424416s
	W0923 03:34:32.519781    9176 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:34:32.536202    9176 out.go:177] * Deleting "kubernetes-upgrade-915000" in qemu2 ...
	W0923 03:34:32.567509    9176 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:34:32.567538    9176 start.go:729] Will try again in 5 seconds ...
	I0923 03:34:37.569673    9176 start.go:360] acquireMachinesLock for kubernetes-upgrade-915000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:34:37.570216    9176 start.go:364] duration metric: took 447.625µs to acquireMachinesLock for "kubernetes-upgrade-915000"
	I0923 03:34:37.570332    9176 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-915000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-915000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:34:37.570503    9176 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:34:37.580071    9176 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:34:37.619125    9176 start.go:159] libmachine.API.Create for "kubernetes-upgrade-915000" (driver="qemu2")
	I0923 03:34:37.619178    9176 client.go:168] LocalClient.Create starting
	I0923 03:34:37.619295    9176 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:34:37.619376    9176 main.go:141] libmachine: Decoding PEM data...
	I0923 03:34:37.619394    9176 main.go:141] libmachine: Parsing certificate...
	I0923 03:34:37.619456    9176 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:34:37.619496    9176 main.go:141] libmachine: Decoding PEM data...
	I0923 03:34:37.619507    9176 main.go:141] libmachine: Parsing certificate...
	I0923 03:34:37.620113    9176 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:34:37.792845    9176 main.go:141] libmachine: Creating SSH key...
	I0923 03:34:37.826883    9176 main.go:141] libmachine: Creating Disk image...
	I0923 03:34:37.826888    9176 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:34:37.827086    9176 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2
	I0923 03:34:37.836506    9176 main.go:141] libmachine: STDOUT: 
	I0923 03:34:37.836522    9176 main.go:141] libmachine: STDERR: 
	I0923 03:34:37.836601    9176 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2 +20000M
	I0923 03:34:37.844650    9176 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:34:37.844666    9176 main.go:141] libmachine: STDERR: 
	I0923 03:34:37.844679    9176 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2
	I0923 03:34:37.844684    9176 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:34:37.844693    9176 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:34:37.844719    9176 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:2a:93:69:09:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2
	I0923 03:34:37.846518    9176 main.go:141] libmachine: STDOUT: 
	I0923 03:34:37.846533    9176 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:34:37.846550    9176 client.go:171] duration metric: took 227.372625ms to LocalClient.Create
	I0923 03:34:39.847186    9176 start.go:128] duration metric: took 2.276636417s to createHost
	I0923 03:34:39.847232    9176 start.go:83] releasing machines lock for "kubernetes-upgrade-915000", held for 2.277041042s
	W0923 03:34:39.847398    9176 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-915000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-915000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:34:39.857820    9176 out.go:201] 
	W0923 03:34:39.865864    9176 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:34:39.865881    9176 out.go:270] * 
	* 
	W0923 03:34:39.867549    9176 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:34:39.880811    9176 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-915000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-915000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-915000: (3.583463834s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-915000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-915000 status --format={{.Host}}: exit status 7 (55.530167ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-915000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-915000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.181976375s)

-- stdout --
	* [kubernetes-upgrade-915000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-915000" primary control-plane node in "kubernetes-upgrade-915000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-915000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-915000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:34:43.558459    9216 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:34:43.558611    9216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:34:43.558614    9216 out.go:358] Setting ErrFile to fd 2...
	I0923 03:34:43.558617    9216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:34:43.558743    9216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:34:43.559753    9216 out.go:352] Setting JSON to false
	I0923 03:34:43.576941    9216 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5654,"bootTime":1727082029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:34:43.577014    9216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:34:43.580934    9216 out.go:177] * [kubernetes-upgrade-915000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:34:43.587892    9216 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:34:43.587967    9216 notify.go:220] Checking for updates...
	I0923 03:34:43.594705    9216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:34:43.598808    9216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:34:43.602884    9216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:34:43.605805    9216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:34:43.608873    9216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:34:43.612100    9216 config.go:182] Loaded profile config "kubernetes-upgrade-915000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 03:34:43.612370    9216 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:34:43.614696    9216 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:34:43.621838    9216 start.go:297] selected driver: qemu2
	I0923 03:34:43.621842    9216 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-915000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-915000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:34:43.621890    9216 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:34:43.624002    9216 cni.go:84] Creating CNI manager for ""
	I0923 03:34:43.624034    9216 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:34:43.624053    9216 start.go:340] cluster config:
	{Name:kubernetes-upgrade-915000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-915000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:34:43.627329    9216 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:34:43.635815    9216 out.go:177] * Starting "kubernetes-upgrade-915000" primary control-plane node in "kubernetes-upgrade-915000" cluster
	I0923 03:34:43.639903    9216 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:34:43.639925    9216 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:34:43.639933    9216 cache.go:56] Caching tarball of preloaded images
	I0923 03:34:43.640020    9216 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:34:43.640031    9216 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:34:43.640090    9216 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/kubernetes-upgrade-915000/config.json ...
	I0923 03:34:43.640550    9216 start.go:360] acquireMachinesLock for kubernetes-upgrade-915000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:34:43.640577    9216 start.go:364] duration metric: took 21µs to acquireMachinesLock for "kubernetes-upgrade-915000"
	I0923 03:34:43.640586    9216 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:34:43.640591    9216 fix.go:54] fixHost starting: 
	I0923 03:34:43.640700    9216 fix.go:112] recreateIfNeeded on kubernetes-upgrade-915000: state=Stopped err=<nil>
	W0923 03:34:43.640708    9216 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:34:43.644888    9216 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-915000" ...
	I0923 03:34:43.652800    9216 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:34:43.652846    9216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:2a:93:69:09:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2
	I0923 03:34:43.654992    9216 main.go:141] libmachine: STDOUT: 
	I0923 03:34:43.655010    9216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:34:43.655041    9216 fix.go:56] duration metric: took 14.449291ms for fixHost
	I0923 03:34:43.655045    9216 start.go:83] releasing machines lock for "kubernetes-upgrade-915000", held for 14.464459ms
	W0923 03:34:43.655052    9216 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:34:43.655080    9216 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:34:43.655084    9216 start.go:729] Will try again in 5 seconds ...
	I0923 03:34:48.655349    9216 start.go:360] acquireMachinesLock for kubernetes-upgrade-915000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:34:48.655790    9216 start.go:364] duration metric: took 349.417µs to acquireMachinesLock for "kubernetes-upgrade-915000"
	I0923 03:34:48.655913    9216 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:34:48.655924    9216 fix.go:54] fixHost starting: 
	I0923 03:34:48.656386    9216 fix.go:112] recreateIfNeeded on kubernetes-upgrade-915000: state=Stopped err=<nil>
	W0923 03:34:48.656401    9216 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:34:48.662881    9216 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-915000" ...
	I0923 03:34:48.666898    9216 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:34:48.667059    9216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:2a:93:69:09:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubernetes-upgrade-915000/disk.qcow2
	I0923 03:34:48.674780    9216 main.go:141] libmachine: STDOUT: 
	I0923 03:34:48.674846    9216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:34:48.674907    9216 fix.go:56] duration metric: took 18.98175ms for fixHost
	I0923 03:34:48.674921    9216 start.go:83] releasing machines lock for "kubernetes-upgrade-915000", held for 19.11675ms
	W0923 03:34:48.675070    9216 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-915000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-915000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:34:48.682978    9216 out.go:201] 
	W0923 03:34:48.686042    9216 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:34:48.686073    9216 out.go:270] * 
	* 
	W0923 03:34:48.687744    9216 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:34:48.697773    9216 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-915000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-915000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-915000 version --output=json: exit status 1 (54.663375ms)

** stderr ** 
	error: context "kubernetes-upgrade-915000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-23 03:34:48.766547 -0700 PDT m=+943.594563626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-915000 -n kubernetes-upgrade-915000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-915000 -n kubernetes-upgrade-915000: exit status 7 (31.980708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-915000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-915000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-915000
--- FAIL: TestKubernetesUpgrade (18.85s)
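Every start attempt above dies at the same point: the qemu2 driver cannot reach the vmnet helper ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so nothing is ever booted and the upgrade path is never exercised. A minimal connectivity probe, as a sketch that assumes only the default SocketVMnetPath:/var/run/socket_vmnet recorded in the profile above, can confirm whether socket_vmnet is accepting connections on the agent before the suite is re-run:

// probe_socket_vmnet.go - sketch; assumes the /var/run/socket_vmnet path from the logs.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client is handed by the qemu2 driver.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Mirrors the driver failure seen above ("Connection refused").
		fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails with the same connection-refused error, the failure is environmental (no socket_vmnet daemon listening on the agent) rather than a regression in the kubernetes-upgrade logic itself.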

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.97s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19689
- KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1547888509/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.97s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.33s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19689
- KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current719164336/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.33s)
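Both hyperkit subtests fail identically: the binary exits 56 with DRV_UNSUPPORTED_OS because the hyperkit driver exists only for darwin/amd64 and this agent is darwin/arm64, so the failure is environmental rather than upgrade-related. A platform guard of the following shape would skip instead of fail on such hosts; this is a sketch only, not minikube's actual test code, and the package name and skip message are illustrative:

package driver_test

import (
	"runtime"
	"testing"
)

// Sketch of a platform guard: hyperkit is a darwin/amd64-only hypervisor,
// so any other GOOS/GOARCH combination skips the upgrade subtests.
func TestHyperkitDriverSkipUpgrade(t *testing.T) {
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		t.Skipf("hyperkit requires darwin/amd64; host is %s/%s", runtime.GOOS, runtime.GOARCH)
	}
	// ... the v1.2.0 / v1.11.0 -> current upgrade steps would run here ...
}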

TestStoppedBinaryUpgrade/Upgrade (575.16s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3303131119 start -p stopped-upgrade-516000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3303131119 start -p stopped-upgrade-516000 --memory=2200 --vm-driver=qemu2 : (41.043840583s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3303131119 -p stopped-upgrade-516000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3303131119 -p stopped-upgrade-516000 stop: (12.112528125s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-516000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-516000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.886699s)

-- stdout --
	* [stopped-upgrade-516000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-516000" primary control-plane node in "stopped-upgrade-516000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-516000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0923 03:35:43.015087    9267 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:35:43.015264    9267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:35:43.015268    9267 out.go:358] Setting ErrFile to fd 2...
	I0923 03:35:43.015271    9267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:35:43.015443    9267 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:35:43.016802    9267 out.go:352] Setting JSON to false
	I0923 03:35:43.036618    9267 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5714,"bootTime":1727082029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:35:43.036691    9267 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:35:43.041751    9267 out.go:177] * [stopped-upgrade-516000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:35:43.048736    9267 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:35:43.048796    9267 notify.go:220] Checking for updates...
	I0923 03:35:43.056578    9267 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:35:43.060762    9267 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:35:43.063713    9267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:35:43.066756    9267 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:35:43.069710    9267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:35:43.073997    9267 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:35:43.076664    9267 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 03:35:43.079735    9267 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:35:43.082706    9267 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:35:43.089731    9267 start.go:297] selected driver: qemu2
	I0923 03:35:43.089736    9267 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 03:35:43.089784    9267 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:35:43.092314    9267 cni.go:84] Creating CNI manager for ""
	I0923 03:35:43.092343    9267 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:35:43.092379    9267 start.go:340] cluster config:
	{Name:stopped-upgrade-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 03:35:43.092441    9267 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:35:43.100704    9267 out.go:177] * Starting "stopped-upgrade-516000" primary control-plane node in "stopped-upgrade-516000" cluster
	I0923 03:35:43.104690    9267 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 03:35:43.104704    9267 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0923 03:35:43.104712    9267 cache.go:56] Caching tarball of preloaded images
	I0923 03:35:43.104763    9267 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:35:43.104768    9267 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0923 03:35:43.104815    9267 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/config.json ...
	I0923 03:35:43.105233    9267 start.go:360] acquireMachinesLock for stopped-upgrade-516000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:35:43.105268    9267 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "stopped-upgrade-516000"
	I0923 03:35:43.105276    9267 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:35:43.105281    9267 fix.go:54] fixHost starting: 
	I0923 03:35:43.105383    9267 fix.go:112] recreateIfNeeded on stopped-upgrade-516000: state=Stopped err=<nil>
	W0923 03:35:43.105391    9267 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:35:43.113729    9267 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-516000" ...
	I0923 03:35:43.117740    9267 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:35:43.117808    9267 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51483-:22,hostfwd=tcp::51484-:2376,hostname=stopped-upgrade-516000 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/disk.qcow2
	I0923 03:35:43.164967    9267 main.go:141] libmachine: STDOUT: 
	I0923 03:35:43.164999    9267 main.go:141] libmachine: STDERR: 
	I0923 03:35:43.165009    9267 main.go:141] libmachine: Waiting for VM to start (ssh -p 51483 docker@127.0.0.1)...
	I0923 03:36:03.555915    9267 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/config.json ...
	I0923 03:36:03.556146    9267 machine.go:93] provisionDockerMachine start ...
	I0923 03:36:03.556196    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:03.556327    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:03.556331    9267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 03:36:03.620919    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 03:36:03.620935    9267 buildroot.go:166] provisioning hostname "stopped-upgrade-516000"
	I0923 03:36:03.621000    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:03.621134    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:03.621140    9267 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-516000 && echo "stopped-upgrade-516000" | sudo tee /etc/hostname
	I0923 03:36:03.690425    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-516000
	
	I0923 03:36:03.690492    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:03.690598    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:03.690609    9267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-516000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-516000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-516000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 03:36:03.755578    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 03:36:03.755595    9267 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19689-6600/.minikube CaCertPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19689-6600/.minikube}
	I0923 03:36:03.755603    9267 buildroot.go:174] setting up certificates
	I0923 03:36:03.755608    9267 provision.go:84] configureAuth start
	I0923 03:36:03.755613    9267 provision.go:143] copyHostCerts
	I0923 03:36:03.755688    9267 exec_runner.go:144] found /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.pem, removing ...
	I0923 03:36:03.755694    9267 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.pem
	I0923 03:36:03.755805    9267 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.pem (1078 bytes)
	I0923 03:36:03.755989    9267 exec_runner.go:144] found /Users/jenkins/minikube-integration/19689-6600/.minikube/cert.pem, removing ...
	I0923 03:36:03.755993    9267 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19689-6600/.minikube/cert.pem
	I0923 03:36:03.756056    9267 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19689-6600/.minikube/cert.pem (1123 bytes)
	I0923 03:36:03.756167    9267 exec_runner.go:144] found /Users/jenkins/minikube-integration/19689-6600/.minikube/key.pem, removing ...
	I0923 03:36:03.756171    9267 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19689-6600/.minikube/key.pem
	I0923 03:36:03.756212    9267 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19689-6600/.minikube/key.pem (1675 bytes)
	I0923 03:36:03.756307    9267 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-516000 san=[127.0.0.1 localhost minikube stopped-upgrade-516000]
	I0923 03:36:03.862839    9267 provision.go:177] copyRemoteCerts
	I0923 03:36:03.862876    9267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 03:36:03.862889    9267 sshutil.go:53] new ssh client: &{IP:localhost Port:51483 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/id_rsa Username:docker}
	I0923 03:36:03.897920    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 03:36:03.904435    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0923 03:36:03.911684    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 03:36:03.918843    9267 provision.go:87] duration metric: took 163.228583ms to configureAuth
	I0923 03:36:03.918852    9267 buildroot.go:189] setting minikube options for container-runtime
	I0923 03:36:03.918965    9267 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:36:03.919004    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:03.919101    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:03.919106    9267 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 03:36:03.981702    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 03:36:03.981710    9267 buildroot.go:70] root file system type: tmpfs
	I0923 03:36:03.981769    9267 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 03:36:03.981821    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:03.981926    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:03.981958    9267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 03:36:04.050018    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 03:36:04.050083    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:04.050203    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:04.050215    9267 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 03:36:04.392146    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 03:36:04.392160    9267 machine.go:96] duration metric: took 836.027167ms to provisionDockerMachine
	I0923 03:36:04.392168    9267 start.go:293] postStartSetup for "stopped-upgrade-516000" (driver="qemu2")
	I0923 03:36:04.392175    9267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 03:36:04.392246    9267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 03:36:04.392255    9267 sshutil.go:53] new ssh client: &{IP:localhost Port:51483 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/id_rsa Username:docker}
	I0923 03:36:04.428265    9267 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 03:36:04.429603    9267 info.go:137] Remote host: Buildroot 2021.02.12
	I0923 03:36:04.429610    9267 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19689-6600/.minikube/addons for local assets ...
	I0923 03:36:04.429682    9267 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19689-6600/.minikube/files for local assets ...
	I0923 03:36:04.429776    9267 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19689-6600/.minikube/files/etc/ssl/certs/71212.pem -> 71212.pem in /etc/ssl/certs
	I0923 03:36:04.429877    9267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 03:36:04.432573    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/files/etc/ssl/certs/71212.pem --> /etc/ssl/certs/71212.pem (1708 bytes)
	I0923 03:36:04.439899    9267 start.go:296] duration metric: took 47.726583ms for postStartSetup
	I0923 03:36:04.439915    9267 fix.go:56] duration metric: took 21.335105833s for fixHost
	I0923 03:36:04.439955    9267 main.go:141] libmachine: Using SSH client type: native
	I0923 03:36:04.440060    9267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105181c00] 0x105184440 <nil>  [] 0s} localhost 51483 <nil> <nil>}
	I0923 03:36:04.440066    9267 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 03:36:04.505113    9267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727087764.535849879
	
	I0923 03:36:04.505121    9267 fix.go:216] guest clock: 1727087764.535849879
	I0923 03:36:04.505129    9267 fix.go:229] Guest: 2024-09-23 03:36:04.535849879 -0700 PDT Remote: 2024-09-23 03:36:04.439917 -0700 PDT m=+21.456803043 (delta=95.932879ms)
	I0923 03:36:04.505143    9267 fix.go:200] guest clock delta is within tolerance: 95.932879ms
	I0923 03:36:04.505147    9267 start.go:83] releasing machines lock for "stopped-upgrade-516000", held for 21.400347334s
	I0923 03:36:04.505212    9267 ssh_runner.go:195] Run: cat /version.json
	I0923 03:36:04.505220    9267 sshutil.go:53] new ssh client: &{IP:localhost Port:51483 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/id_rsa Username:docker}
	I0923 03:36:04.505243    9267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 03:36:04.505267    9267 sshutil.go:53] new ssh client: &{IP:localhost Port:51483 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/id_rsa Username:docker}
	W0923 03:36:04.505808    9267 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51483: connect: connection refused
	I0923 03:36:04.505834    9267 retry.go:31] will retry after 231.011069ms: dial tcp [::1]:51483: connect: connection refused
	W0923 03:36:04.537028    9267 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0923 03:36:04.537092    9267 ssh_runner.go:195] Run: systemctl --version
	I0923 03:36:04.538952    9267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 03:36:04.540578    9267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 03:36:04.540614    9267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0923 03:36:04.543854    9267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0923 03:36:04.548731    9267 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 03:36:04.548743    9267 start.go:495] detecting cgroup driver to use...
	I0923 03:36:04.548834    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 03:36:04.556144    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0923 03:36:04.559591    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 03:36:04.563059    9267 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 03:36:04.563094    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 03:36:04.566080    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 03:36:04.569023    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 03:36:04.572291    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 03:36:04.575834    9267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 03:36:04.579339    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 03:36:04.582337    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 03:36:04.585142    9267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 03:36:04.588405    9267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 03:36:04.591471    9267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 03:36:04.594192    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:04.658974    9267 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 03:36:04.664682    9267 start.go:495] detecting cgroup driver to use...
	I0923 03:36:04.664764    9267 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 03:36:04.670777    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 03:36:04.675662    9267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 03:36:04.683372    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 03:36:04.688277    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 03:36:04.693811    9267 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 03:36:04.731260    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 03:36:04.736172    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 03:36:04.743087    9267 ssh_runner.go:195] Run: which cri-dockerd
	I0923 03:36:04.744554    9267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 03:36:04.747649    9267 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0923 03:36:04.752706    9267 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 03:36:04.813259    9267 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 03:36:04.872767    9267 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 03:36:04.872820    9267 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 03:36:04.877989    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:04.936576    9267 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 03:36:06.073228    9267 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.136659708s)
	I0923 03:36:06.073293    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 03:36:06.078243    9267 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0923 03:36:06.083807    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 03:36:06.089473    9267 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 03:36:06.153052    9267 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 03:36:06.214708    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:06.274457    9267 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 03:36:06.280208    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 03:36:06.284982    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:06.351053    9267 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 03:36:06.388650    9267 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 03:36:06.388750    9267 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 03:36:06.391421    9267 start.go:563] Will wait 60s for crictl version
	I0923 03:36:06.391478    9267 ssh_runner.go:195] Run: which crictl
	I0923 03:36:06.392880    9267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 03:36:06.407467    9267 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0923 03:36:06.407553    9267 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 03:36:06.423584    9267 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 03:36:06.444286    9267 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0923 03:36:06.444420    9267 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0923 03:36:06.445729    9267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 03:36:06.449159    9267 kubeadm.go:883] updating cluster {Name:stopped-upgrade-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0923 03:36:06.449206    9267 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 03:36:06.449253    9267 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 03:36:06.459663    9267 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 03:36:06.459681    9267 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 03:36:06.459729    9267 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 03:36:06.463219    9267 ssh_runner.go:195] Run: which lz4
	I0923 03:36:06.464492    9267 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 03:36:06.465900    9267 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 03:36:06.465911    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0923 03:36:07.380692    9267 docker.go:649] duration metric: took 916.262291ms to copy over tarball
	I0923 03:36:07.380756    9267 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 03:36:08.528354    9267 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.147609459s)
	I0923 03:36:08.528367    9267 ssh_runner.go:146] rm: /preloaded.tar.lz4
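
The preload path above is: stat the target, scp the ~360 MB tarball over when it is missing, unpack it under /var with lz4 decompression while preserving security xattrs, then remove the tarball. A hedged Go sketch of the extract step (flags copied verbatim from the log; assumes tar and lz4 exist on the target):

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Same invocation as the log: keep security xattrs, decompress
        // with lz4, unpack under /var so image layers land in
        // /var/lib/docker before the daemon is restarted.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
        log.Printf("extract took %s", time.Since(start))
    }
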
	I0923 03:36:08.544219    9267 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 03:36:08.547625    9267 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0923 03:36:08.552659    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:08.615756    9267 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 03:36:10.301822    9267 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.686086459s)
	I0923 03:36:10.301949    9267 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 03:36:10.314043    9267 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 03:36:10.314053    9267 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 03:36:10.314059    9267 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0923 03:36:10.317600    9267 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:36:10.319367    9267 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 03:36:10.321245    9267 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:36:10.321402    9267 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:36:10.323733    9267 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 03:36:10.323843    9267 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0923 03:36:10.325266    9267 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:36:10.325395    9267 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0923 03:36:10.326577    9267 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0923 03:36:10.326651    9267 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:36:10.327746    9267 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0923 03:36:10.327940    9267 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:36:10.329231    9267 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:36:10.329328    9267 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:36:10.329887    9267 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:36:10.330708    9267 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
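
Each required image is first looked up in the local daemon; a "No such image" response, as in the daemon lookup lines above, marks it for transfer from the host cache. A minimal sketch of that existence test using the exit status of docker image inspect (not minikube's cache_images code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imageExists reports whether the daemon already holds the image:
    // `docker image inspect` exits non-zero when the image is absent.
    func imageExists(ref string) bool {
        return exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", ref).Run() == nil
    }

    func main() {
        for _, ref := range []string{
            "registry.k8s.io/pause:3.7",
            "registry.k8s.io/etcd:3.5.3-0",
        } {
            fmt.Printf("%s present=%v\n", ref, imageExists(ref))
        }
    }
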
	I0923 03:36:10.769115    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0923 03:36:10.775902    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0923 03:36:10.783588    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0923 03:36:10.789248    9267 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0923 03:36:10.789276    9267 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0923 03:36:10.789344    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0923 03:36:10.791052    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:36:10.805506    9267 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0923 03:36:10.805529    9267 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	W0923 03:36:10.805510    9267 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0923 03:36:10.805593    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0923 03:36:10.805676    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:36:10.808636    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:36:10.811249    9267 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0923 03:36:10.811267    9267 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0923 03:36:10.811321    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0923 03:36:10.826841    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:36:10.828799    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0923 03:36:10.828821    9267 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0923 03:36:10.828836    9267 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:36:10.828877    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0923 03:36:10.828913    9267 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0923 03:36:10.838877    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0923 03:36:10.847428    9267 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0923 03:36:10.847450    9267 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:36:10.847470    9267 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0923 03:36:10.847481    9267 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:36:10.847515    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0923 03:36:10.847521    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 03:36:10.847617    9267 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0923 03:36:10.847680    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 03:36:10.851810    9267 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0923 03:36:10.851834    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0923 03:36:10.851915    9267 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0923 03:36:10.851931    9267 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:36:10.851974    9267 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0923 03:36:10.859174    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0923 03:36:10.874312    9267 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0923 03:36:10.874337    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0923 03:36:10.874386    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0923 03:36:10.874491    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0923 03:36:10.874604    9267 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0923 03:36:10.884245    9267 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0923 03:36:10.884258    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0923 03:36:10.887816    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0923 03:36:10.887864    9267 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0923 03:36:10.887877    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0923 03:36:10.947096    9267 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0923 03:36:10.989822    9267 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0923 03:36:10.989841    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0923 03:36:11.084418    9267 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0923 03:36:11.179047    9267 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0923 03:36:11.179063    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0923 03:36:11.211092    9267 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0923 03:36:11.211214    9267 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:36:11.338404    9267 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0923 03:36:11.338495    9267 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0923 03:36:11.338518    9267 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:36:11.338595    9267 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:36:11.353210    9267 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0923 03:36:11.353343    9267 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0923 03:36:11.354900    9267 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0923 03:36:11.354915    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0923 03:36:11.388323    9267 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0923 03:36:11.388335    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0923 03:36:11.627056    9267 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0923 03:36:11.627097    9267 cache_images.go:92] duration metric: took 1.313059292s to LoadCachedImages
	W0923 03:36:11.627142    9267 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
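
Each transferred tarball above is streamed into the daemon with /bin/bash -c "sudo cat FILE | docker load". A sketch of that load step in Go, feeding the file to docker load's stdin directly instead of through cat (file path illustrative, not minikube's loader):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        f, err := os.Open("/var/lib/minikube/images/pause_3.7")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        // docker load reads the image tarball from stdin.
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        cmd.Stdout = os.Stdout
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }
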
	I0923 03:36:11.627150    9267 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0923 03:36:11.627202    9267 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-516000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
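
The kubelet unit rendered above relies on the systemd drop-in convention: the empty ExecStart= line first clears the base unit's command, so the second ExecStart= fully replaces it instead of registering a second process. A sketch that writes such a drop-in (binary path from the log; the flag set is abbreviated for illustration):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        bin := "/var/lib/minikube/binaries/v1.24.1/kubelet" // from the log
        unit := "[Unit]\nWants=docker.socket\n\n[Service]\n" +
            // Empty ExecStart= resets the base unit's command line.
            "ExecStart=\n" +
            fmt.Sprintf("ExecStart=%s --node-ip=10.0.2.15\n", bin) +
            "\n[Install]\n"
        if err := os.WriteFile("10-kubeadm.conf", []byte(unit), 0644); err != nil {
            panic(err)
        }
    }
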
	I0923 03:36:11.627282    9267 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 03:36:11.642080    9267 cni.go:84] Creating CNI manager for ""
	I0923 03:36:11.642093    9267 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:36:11.642099    9267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 03:36:11.642108    9267 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-516000 NodeName:stopped-upgrade-516000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 03:36:11.642173    9267 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-516000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
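
The three manifests above (InitConfiguration, ClusterConfiguration plus KubeletConfiguration and KubeProxyConfiguration) are rendered from the kubeadm options struct and scp'd to /var/tmp/minikube/kubeadm.yaml.new, as shown below. A toy version of that rendering step, assuming text/template and only the API-endpoint fields (the real template covers every field above):

    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.IP}}\n" +
        "  bindPort: {{.Port}}\n"

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        _ = t.Execute(os.Stdout, map[string]any{"IP": "10.0.2.15", "Port": 8443})
    }
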
	
	I0923 03:36:11.642236    9267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0923 03:36:11.645082    9267 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 03:36:11.645118    9267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 03:36:11.648137    9267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0923 03:36:11.653286    9267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 03:36:11.658077    9267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0923 03:36:11.663341    9267 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0923 03:36:11.664412    9267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 03:36:11.667854    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:36:11.730438    9267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 03:36:11.737882    9267 certs.go:68] Setting up /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000 for IP: 10.0.2.15
	I0923 03:36:11.737895    9267 certs.go:194] generating shared ca certs ...
	I0923 03:36:11.737903    9267 certs.go:226] acquiring lock for ca certs: {Name:mk939083d37f22e3f0ca1f4aad8fa886b4374915 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:36:11.738065    9267 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.key
	I0923 03:36:11.738112    9267 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/proxy-client-ca.key
	I0923 03:36:11.738117    9267 certs.go:256] generating profile certs ...
	I0923 03:36:11.738177    9267 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/client.key
	I0923 03:36:11.738193    9267 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.key.cd07b11c
	I0923 03:36:11.738204    9267 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.crt.cd07b11c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0923 03:36:11.812636    9267 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.crt.cd07b11c ...
	I0923 03:36:11.812648    9267 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.crt.cd07b11c: {Name:mk37feb399682a06992ffd6d3e9a9124a477901a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:36:11.812947    9267 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.key.cd07b11c ...
	I0923 03:36:11.812952    9267 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.key.cd07b11c: {Name:mk39f6efef4910fd0322c7c95819c2a4737e57e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:36:11.813089    9267 certs.go:381] copying /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.crt.cd07b11c -> /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.crt
	I0923 03:36:11.813220    9267 certs.go:385] copying /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.key.cd07b11c -> /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.key
	I0923 03:36:11.813353    9267 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/proxy-client.key
	I0923 03:36:11.813499    9267 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/7121.pem (1338 bytes)
	W0923 03:36:11.813522    9267 certs.go:480] ignoring /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/7121_empty.pem, impossibly tiny 0 bytes
	I0923 03:36:11.813528    9267 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 03:36:11.813552    9267 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem (1078 bytes)
	I0923 03:36:11.813570    9267 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem (1123 bytes)
	I0923 03:36:11.813602    9267 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/key.pem (1675 bytes)
	I0923 03:36:11.813640    9267 certs.go:484] found cert: /Users/jenkins/minikube-integration/19689-6600/.minikube/files/etc/ssl/certs/71212.pem (1708 bytes)
	I0923 03:36:11.813971    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 03:36:11.820847    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 03:36:11.827868    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 03:36:11.834803    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 03:36:11.841636    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 03:36:11.848257    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 03:36:11.855492    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 03:36:11.862271    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 03:36:11.868889    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 03:36:11.876166    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/7121.pem --> /usr/share/ca-certificates/7121.pem (1338 bytes)
	I0923 03:36:11.883189    9267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19689-6600/.minikube/files/etc/ssl/certs/71212.pem --> /usr/share/ca-certificates/71212.pem (1708 bytes)
	I0923 03:36:11.889692    9267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 03:36:11.894534    9267 ssh_runner.go:195] Run: openssl version
	I0923 03:36:11.896322    9267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 03:36:11.899695    9267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 03:36:11.901167    9267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0923 03:36:11.901190    9267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 03:36:11.902946    9267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 03:36:11.905913    9267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7121.pem && ln -fs /usr/share/ca-certificates/7121.pem /etc/ssl/certs/7121.pem"
	I0923 03:36:11.908756    9267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7121.pem
	I0923 03:36:11.910147    9267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:19 /usr/share/ca-certificates/7121.pem
	I0923 03:36:11.910174    9267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7121.pem
	I0923 03:36:11.911967    9267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7121.pem /etc/ssl/certs/51391683.0"
	I0923 03:36:11.915470    9267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71212.pem && ln -fs /usr/share/ca-certificates/71212.pem /etc/ssl/certs/71212.pem"
	I0923 03:36:11.918502    9267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71212.pem
	I0923 03:36:11.919915    9267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:19 /usr/share/ca-certificates/71212.pem
	I0923 03:36:11.919937    9267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71212.pem
	I0923 03:36:11.921721    9267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71212.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 03:36:11.924753    9267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 03:36:11.926284    9267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 03:36:11.928334    9267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 03:36:11.930132    9267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 03:36:11.932057    9267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 03:36:11.933885    9267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 03:36:11.935541    9267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
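
Each `openssl x509 -checkend 86400` run above asks one question: will this cert still be valid in 24 hours (exit 0 if yes, 1 if not)? The same check with Go's standard x509 parser, under an illustrative file name:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path lapses inside d,
    // i.e. the inverse of a successful `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
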
	I0923 03:36:11.937318    9267 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 03:36:11.937394    9267 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 03:36:11.947271    9267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 03:36:11.950182    9267 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 03:36:11.950195    9267 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 03:36:11.950223    9267 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 03:36:11.952953    9267 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 03:36:11.953243    9267 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-516000" does not appear in /Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:36:11.953350    9267 kubeconfig.go:62] /Users/jenkins/minikube-integration/19689-6600/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-516000" cluster setting kubeconfig missing "stopped-upgrade-516000" context setting]
	I0923 03:36:11.953550    9267 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/kubeconfig: {Name:mkaea904cf5cdb46cc70169c92ea1151561be4fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:36:11.953990    9267 kapi.go:59] client config for stopped-upgrade-516000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/client.key", CAFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10675a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
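
The rest.Config dumped above is a client-go client built from the profile's client cert/key and the cluster CA. A minimal sketch of constructing the equivalent config by hand (client-go import paths; file paths shortened for illustration):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "client.crt", // profile client cert
                KeyFile:  "client.key",
                CAFile:   "ca.crt", // cluster CA from .minikube
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", clientset)
    }
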
	I0923 03:36:11.954316    9267 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 03:36:11.956945    9267 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-516000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
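
Drift detection above keys off diff's exit status: `diff -u` exits 0 when the on-disk kubeadm.yaml matches the newly rendered one and 1 when they differ, in which case the unified diff is kept for the "will reconfigure" message. A sketch of that check (paths from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
        // Exit code 1 means "files differ"; anything else is a real error.
        if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
            fmt.Printf("config drift detected:\n%s", out)
        } else if err != nil {
            fmt.Println("diff failed:", err)
        } else {
            fmt.Println("config unchanged")
        }
    }
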
	I0923 03:36:11.956949    9267 kubeadm.go:1160] stopping kube-system containers ...
	I0923 03:36:11.956995    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 03:36:11.967608    9267 docker.go:483] Stopping containers: [66fdd05327e1 b7f7027cb0f6 560a63128e94 c1860da13243 d3552f071944 3f6ad30554d6 e970dd8e9394 16c4caebd050]
	I0923 03:36:11.967690    9267 ssh_runner.go:195] Run: docker stop 66fdd05327e1 b7f7027cb0f6 560a63128e94 c1860da13243 d3552f071944 3f6ad30554d6 e970dd8e9394 16c4caebd050
	I0923 03:36:11.978321    9267 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 03:36:11.983841    9267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 03:36:11.986950    9267 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 03:36:11.986955    9267 kubeadm.go:157] found existing configuration files:
	
	I0923 03:36:11.986979    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/admin.conf
	I0923 03:36:11.989455    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 03:36:11.989484    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 03:36:11.992339    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/kubelet.conf
	I0923 03:36:11.995409    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 03:36:11.995434    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 03:36:11.998193    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/controller-manager.conf
	I0923 03:36:12.000674    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 03:36:12.000694    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 03:36:12.003660    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/scheduler.conf
	I0923 03:36:12.006542    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 03:36:12.006566    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 03:36:12.008992    9267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 03:36:12.012125    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:36:12.034896    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:36:12.343736    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:36:12.458246    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 03:36:12.486858    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
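
The restart path runs kubeadm phase by phase rather than a full `kubeadm init`, in the order shown above: certs, kubeconfigs, kubelet-start, control-plane manifests, then local etcd. A sketch of that sequence as a loop (binary and config paths from the log; not minikube's kubeadm.go):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{kubeadm, "init", "phase"}, p...)
            args = append(args, "--config", cfg)
            if err := exec.Command("sudo", args...).Run(); err != nil {
                log.Fatalf("phase %v: %v", p, err)
            }
        }
    }
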
	I0923 03:36:12.512939    9267 api_server.go:52] waiting for apiserver process to appear ...
	I0923 03:36:12.513019    9267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 03:36:13.014781    9267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 03:36:13.515048    9267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 03:36:13.519491    9267 api_server.go:72] duration metric: took 1.00657525s to wait for apiserver process to appear ...
	I0923 03:36:13.519501    9267 api_server.go:88] waiting for apiserver healthz status ...
	I0923 03:36:13.519511    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:18.521485    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:18.521526    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:23.521683    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:23.521721    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:28.521937    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:28.521969    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:33.522361    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:33.522467    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:38.523473    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:38.523535    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:43.524419    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:43.524440    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:48.525454    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:48.525496    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:53.526988    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:53.527057    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:36:58.529208    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:36:58.529256    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:03.531386    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:03.531408    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:08.533477    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:08.533510    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:13.535673    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
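
Each healthz probe above times out after roughly five seconds and the loop retries until the overall wait budget is spent; here it never succeeds, so log gathering kicks in below. A sketch of such a poll loop (deadline and retry interval illustrative; InsecureSkipVerify stands in for the CA setup a real check would use):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-request budget, as in the log
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                fmt.Println("apiserver healthy")
                return
            }
            if resp != nil {
                resp.Body.Close()
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for healthz")
    }
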
	I0923 03:37:13.535956    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:13.556912    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:37:13.557029    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:13.571808    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:37:13.571906    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:13.584051    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:37:13.584137    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:13.595105    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:37:13.595190    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:13.605577    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:37:13.605660    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:13.616444    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:37:13.616527    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:13.626265    9267 logs.go:276] 0 containers: []
	W0923 03:37:13.626278    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:13.626358    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:13.639805    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:37:13.639827    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:37:13.639833    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:37:13.653277    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:37:13.653289    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:37:13.671104    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:13.671118    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:13.696644    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:13.696652    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:13.736515    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:37:13.736525    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:37:13.777710    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:37:13.777724    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:37:13.789130    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:13.789142    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:13.794010    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:37:13.794016    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:37:13.809640    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:37:13.809655    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:37:13.824493    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:37:13.824509    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:13.836320    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:13.836333    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:13.939188    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:37:13.939202    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:37:13.950276    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:37:13.950288    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:37:13.964788    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:37:13.964799    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:37:13.975883    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:37:13.975896    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:37:13.989802    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:37:13.989814    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:37:14.004386    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:37:14.004397    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
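
The gathering pass above is mechanical: list containers matching the k8s_<component> name filter, then `docker logs --tail 400` each ID. A compact sketch of the same sweep (component list abbreviated; not minikube's logs.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // tailLogs lists containers for one component and tails each one,
    // mirroring the docker ps / docker logs pairs in the log.
    func tailLogs(component string) {
        ids, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, id := range strings.Fields(string(ids)) {
            out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("== %s %s ==\n%s", component, id, out)
        }
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
            tailLogs(c)
        }
    }
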
	I0923 03:37:16.520184    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:21.520500    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:21.520930    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:21.553423    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:37:21.553574    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:21.573380    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:37:21.573509    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:21.592117    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:37:21.592204    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:21.603774    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:37:21.603848    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:21.614265    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:37:21.614364    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:21.624940    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:37:21.625019    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:21.636014    9267 logs.go:276] 0 containers: []
	W0923 03:37:21.636025    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:21.636095    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:21.646909    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:37:21.646928    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:37:21.646935    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:21.659547    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:37:21.659558    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:37:21.674773    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:37:21.674784    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:37:21.686292    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:37:21.686303    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:37:21.728610    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:37:21.728620    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:37:21.744465    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:37:21.744475    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:37:21.757372    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:37:21.757383    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:37:21.777058    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:37:21.777069    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:37:21.792278    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:21.792286    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:21.831892    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:21.831904    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:21.836001    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:37:21.836008    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:37:21.849707    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:21.849719    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:21.875097    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:37:21.875105    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:37:21.887539    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:37:21.887548    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:37:21.899707    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:21.899723    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:21.940893    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:37:21.940908    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:37:21.955511    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:37:21.955524    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
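With the container IDs in hand, the collector pulls the last 400 lines from each container, plus the kubelet and docker/cri-docker journals, filtered dmesg, `kubectl describe nodes`, and a container listing (in the backquoted `which crictl || echo crictl`, the fallback is a bare `crictl` resolved from PATH, and the trailing `|| sudo docker ps -a` covers hosts without crictl at all). The same bundle can be gathered manually with a script along these lines; the commands and the 400-line tail match the log above, while the output directory is hypothetical:

    # Re-run the collection pass above by hand inside the guest.
    out=/tmp/minikube-gather; mkdir -p "$out"
    for id in e56d6672af6c 560a63128e94 9f8eabf2019e 66fdd05327e1 \
              6752664bb454 8cf4fd3b02dd b7f7027cb0f6 348b7054823e \
              943fba58ac97 d3552f071944 df518460d760 ac6774542273; do
      docker logs --tail 400 "$id" > "$out/$id.log" 2>&1
    done
    sudo journalctl -u kubelet -n 400              > "$out/kubelet.log"
    sudo journalctl -u docker -u cri-docker -n 400 > "$out/docker.log"
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg \
      | tail -n 400                                > "$out/dmesg.log"
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig    > "$out/nodes.txt"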
	I0923 03:37:24.469432    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:29.471407    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:29.471983    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:29.506246    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:37:29.506403    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:29.526226    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:37:29.526339    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:29.541649    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:37:29.541741    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:29.554191    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:37:29.554282    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:29.566999    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:37:29.567081    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:29.578164    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:37:29.578253    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:29.589042    9267 logs.go:276] 0 containers: []
	W0923 03:37:29.589053    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:29.589116    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:29.600835    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:37:29.600862    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:37:29.600868    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:37:29.619419    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:37:29.619432    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:37:29.631620    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:37:29.631629    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:37:29.649271    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:37:29.649284    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:37:29.660848    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:37:29.660863    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:29.672848    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:29.672860    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:29.712972    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:29.712980    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:29.749179    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:37:29.749193    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:37:29.764321    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:37:29.764331    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:37:29.775472    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:29.775484    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:29.799023    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:37:29.799034    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:37:29.813311    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:37:29.813320    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:37:29.852033    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:37:29.852044    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:37:29.863627    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:37:29.863640    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:37:29.877963    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:37:29.877973    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:37:29.895185    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:29.895199    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:29.899325    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:37:29.899331    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:37:32.414449    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:37.416653    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:37.416816    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:37.431547    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:37:37.431627    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:37.442286    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:37:37.442361    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:37.452735    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:37:37.452820    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:37.463451    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:37:37.463530    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:37.473871    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:37:37.473950    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:37.484852    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:37:37.484934    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:37.494594    9267 logs.go:276] 0 containers: []
	W0923 03:37:37.494611    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:37.494681    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:37.505339    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:37:37.505357    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:37.505362    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:37.543115    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:37.543131    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:37.579453    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:37:37.579464    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:37:37.594686    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:37:37.594697    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:37:37.611769    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:37:37.611780    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:37:37.625953    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:37:37.625963    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:37:37.640535    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:37:37.640546    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:37:37.652209    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:37:37.652225    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:37:37.663772    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:37:37.663782    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:37.675705    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:37:37.675714    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:37:37.719109    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:37:37.719122    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:37:37.732000    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:37:37.732011    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:37:37.743575    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:37:37.743586    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:37:37.755275    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:37.755286    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:37.778901    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:37.778908    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:37.782600    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:37:37.782606    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:37:37.802654    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:37:37.802670    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:37:40.321044    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:45.323133    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:45.323288    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:45.340321    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:37:45.340427    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:45.354007    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:37:45.354096    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:45.365293    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:37:45.365367    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:45.379160    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:37:45.379244    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:45.389822    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:37:45.389901    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:45.400796    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:37:45.400869    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:45.411197    9267 logs.go:276] 0 containers: []
	W0923 03:37:45.411210    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:45.411282    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:45.421728    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:37:45.421746    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:37:45.421753    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:45.433306    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:45.433322    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:45.471669    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:45.471677    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:45.476181    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:45.476188    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:45.512883    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:37:45.512898    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:37:45.535075    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:37:45.535085    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:37:45.547300    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:37:45.547310    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:37:45.564364    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:37:45.564374    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:37:45.575315    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:37:45.575330    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:37:45.586162    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:37:45.586172    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:37:45.597399    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:37:45.597414    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:37:45.611628    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:37:45.611637    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:37:45.650022    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:37:45.650037    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:37:45.664959    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:37:45.664970    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:37:45.676802    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:37:45.676812    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:37:45.692506    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:45.692518    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:45.716868    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:37:45.716878    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:37:48.232765    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:37:53.233060    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:37:53.233309    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:37:53.253630    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:37:53.253744    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:37:53.267939    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:37:53.268028    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:37:53.279758    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:37:53.279844    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:37:53.294673    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:37:53.294754    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:37:53.305501    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:37:53.305577    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:37:53.315644    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:37:53.315725    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:37:53.325972    9267 logs.go:276] 0 containers: []
	W0923 03:37:53.325983    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:37:53.326049    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:37:53.336430    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:37:53.336446    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:37:53.336452    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:37:53.354049    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:37:53.354059    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:37:53.366188    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:37:53.366198    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:37:53.379753    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:37:53.379763    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:37:53.391365    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:37:53.391378    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:37:53.417184    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:37:53.417191    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:37:53.434258    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:37:53.434268    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:37:53.449337    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:37:53.449353    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:37:53.461297    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:37:53.461314    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:37:53.472742    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:37:53.472754    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:37:53.476882    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:37:53.476890    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:37:53.492825    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:37:53.492838    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:37:53.534388    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:37:53.534399    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:37:53.545871    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:37:53.545884    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:37:53.583116    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:37:53.583127    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:37:53.619649    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:37:53.619664    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:37:53.637838    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:37:53.637853    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
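From here the probe-and-gather cycle repeats roughly every eight seconds (the 5 s probe timeout plus a ~2.5 s gather pass) through at least 03:38:48, with only the order of the gather steps shuffled; the apiserver never answers. To pull just the healthz timeline out of a capture like this, grep for the two api_server.go call sites ("minikube.log" is a hypothetical filename for a saved copy of this output):

    # Probe starts log at api_server.go:253, timeouts at :269.
    grep -E 'api_server.go:(253|269)' minikube.log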
	I0923 03:37:56.157993    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:01.160299    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:01.160408    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:01.171566    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:01.171646    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:01.182363    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:01.182446    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:01.193304    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:01.193388    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:01.205640    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:01.205721    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:01.216040    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:01.216117    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:01.227341    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:01.227445    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:01.237484    9267 logs.go:276] 0 containers: []
	W0923 03:38:01.237495    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:01.237563    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:01.248010    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:01.248026    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:01.248031    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:01.261974    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:01.261984    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:01.276295    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:01.276306    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:01.293609    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:01.293620    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:01.310097    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:01.310106    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:01.325189    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:01.325199    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:01.339477    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:01.339487    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:01.351447    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:01.351458    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:01.363646    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:01.363656    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:01.375283    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:01.375293    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:01.387202    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:01.387214    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:01.425535    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:01.425547    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:01.429990    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:01.429998    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:01.444406    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:01.444421    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:01.456321    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:01.456330    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:01.480435    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:01.480442    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:01.521807    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:01.521825    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:04.061314    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:09.063777    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:09.064089    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:09.094126    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:09.094278    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:09.111512    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:09.111616    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:09.125040    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:09.125137    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:09.136136    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:09.136221    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:09.146053    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:09.146135    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:09.161463    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:09.161538    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:09.173023    9267 logs.go:276] 0 containers: []
	W0923 03:38:09.173036    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:09.173103    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:09.183516    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:09.183534    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:09.183539    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:09.198768    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:09.198779    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:09.212004    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:09.212016    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:09.229709    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:09.229722    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:09.248298    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:09.248309    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:09.259984    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:09.259994    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:09.298786    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:09.298795    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:09.313289    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:09.313302    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:09.324805    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:09.324818    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:09.336883    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:09.336893    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:09.374373    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:09.374389    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:09.386124    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:09.386136    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:09.397871    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:09.397882    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:09.402704    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:09.402711    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:09.437859    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:09.437872    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:09.451550    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:09.451560    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:09.469245    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:09.469258    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
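Every discovery pass returns the same two kube-apiserver IDs, which is consistent with one exited attempt plus a current container that stays up but never serves /healthz. Inside the guest, the lifecycle state and exit code of both could be checked directly; a one-line sketch using the IDs from the passes above:

    # Show lifecycle state for the two apiserver containers.
    docker inspect \
      -f '{{.Name}} {{.State.Status}} exit={{.State.ExitCode}} finished={{.State.FinishedAt}}' \
      e56d6672af6c 560a63128e94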
	I0923 03:38:11.995445    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:16.997780    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:16.998008    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:17.019986    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:17.020111    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:17.036327    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:17.036420    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:17.048495    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:17.048579    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:17.059614    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:17.059694    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:17.073964    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:17.074042    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:17.084770    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:17.084858    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:17.095300    9267 logs.go:276] 0 containers: []
	W0923 03:38:17.095317    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:17.095388    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:17.106118    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:17.106136    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:17.106142    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:17.118005    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:17.118019    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:17.132180    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:17.132190    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:17.136344    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:17.136351    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:17.150079    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:17.150091    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:17.164373    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:17.164384    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:17.202320    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:17.202335    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:17.228143    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:17.228155    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:17.242537    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:17.242549    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:17.254620    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:17.254637    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:17.269777    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:17.269787    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:17.287749    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:17.287759    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:17.299123    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:17.299135    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:17.310909    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:17.310920    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:17.322593    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:17.322601    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:17.359609    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:17.359618    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:17.394753    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:17.394763    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:19.915851    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:24.918156    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:24.918392    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:24.941870    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:24.942060    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:24.959602    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:24.959683    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:24.972284    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:24.972373    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:24.983374    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:24.983455    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:24.993867    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:24.993947    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:25.004478    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:25.004563    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:25.014871    9267 logs.go:276] 0 containers: []
	W0923 03:38:25.014882    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:25.014949    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:25.025546    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:25.025566    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:25.025571    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:25.037531    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:25.037545    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:25.062379    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:25.062388    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:25.087650    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:25.087659    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:25.099937    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:25.099949    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:25.135381    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:25.135398    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:25.175342    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:25.175357    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:25.189872    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:25.189891    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:25.205514    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:25.205528    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:25.219988    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:25.220003    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:25.231735    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:25.231748    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:25.271616    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:25.271630    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:25.275771    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:25.275777    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:25.293021    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:25.293035    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:25.312759    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:25.312770    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:25.328068    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:25.328081    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:25.339169    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:25.339180    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:27.861491    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:32.862938    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:32.863185    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:32.882375    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:32.882481    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:32.896496    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:32.896591    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:32.908673    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:32.908756    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:32.919404    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:32.919484    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:32.929972    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:32.930056    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:32.940437    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:32.940514    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:32.957302    9267 logs.go:276] 0 containers: []
	W0923 03:38:32.957313    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:32.957383    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:32.967754    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:32.967774    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:32.967779    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:32.971976    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:32.971982    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:32.986351    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:32.986362    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:33.000750    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:33.000760    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:33.012023    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:33.012034    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:33.027133    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:33.027143    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:33.051033    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:33.051043    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:33.062660    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:33.062670    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:33.099753    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:33.099767    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:33.134013    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:33.134024    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:33.148566    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:33.148581    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:33.161629    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:33.161641    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:33.176926    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:33.176937    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:33.194471    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:33.194484    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:33.210358    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:33.210374    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:33.248625    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:33.248637    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:33.265954    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:33.265970    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:35.779665    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:40.782095    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:40.782324    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:40.800763    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:40.800919    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:40.814351    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:40.814445    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:40.826253    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:40.826338    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:40.837025    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:40.837113    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:40.848041    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:40.848121    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:40.858543    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:40.858631    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:40.868493    9267 logs.go:276] 0 containers: []
	W0923 03:38:40.868502    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:40.868567    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:40.878653    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:40.878671    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:40.878677    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:40.915100    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:40.915113    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:40.952741    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:40.952753    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:40.964995    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:40.965005    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:40.984746    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:40.984755    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:41.007584    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:41.007591    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:41.011519    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:41.011527    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:41.023080    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:41.023092    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:41.034924    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:41.034936    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:41.046228    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:41.046239    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:41.085722    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:41.085731    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:41.100288    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:41.100297    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:41.117546    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:41.117561    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:41.129037    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:41.129046    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:41.143332    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:41.143342    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:41.157760    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:41.157772    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:41.169409    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:41.169418    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:43.686278    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:48.688714    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:48.688935    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:48.711962    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:48.712081    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:48.727519    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:48.727623    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:48.741727    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:48.741810    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:48.753256    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:48.753335    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:48.763607    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:48.763686    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:48.774089    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:48.774168    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:48.783956    9267 logs.go:276] 0 containers: []
	W0923 03:38:48.783969    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:48.784036    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:48.794563    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:48.794581    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:48.794587    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:48.806430    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:48.806440    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:48.810986    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:48.810995    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:48.846005    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:48.846017    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:48.858413    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:48.858428    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:48.876573    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:48.876584    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:48.914007    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:48.914015    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:48.950876    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:48.950887    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:48.969370    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:48.969386    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:48.983304    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:48.983313    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:48.994360    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:48.994372    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:49.016995    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:49.017001    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:49.029114    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:49.029125    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:49.043734    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:49.043745    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:49.055383    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:49.055395    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:49.075924    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:49.075934    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:49.090031    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:49.090042    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
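Alongside the container tails, every cycle also collects host-level state: the kubelet and Docker units via journalctl, a filtered dmesg, and kubectl describe nodes against the in-VM kubeconfig. The command strings in the sketch below are verbatim from the log; driving them through bash -c is an illustrative stand-in for minikube's ssh_runner, not its real API.

	// Illustrative host-side collection; command strings copied from the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one collection command via bash -c and prints its output.
	func run(name, cmd string) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s ==\n%s", name, out)
		if err != nil {
			fmt.Println("(failed:", err, ")")
		}
	}

	func main() {
		run("kubelet", "sudo journalctl -u kubelet -n 400")
		run("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
		run("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		run("describe nodes",
			"sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	}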
	I0923 03:38:51.606336    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:38:56.607724    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:38:56.607887    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:38:56.623597    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:38:56.623695    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:38:56.636364    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:38:56.636468    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:38:56.647525    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:38:56.647613    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:38:56.658166    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:38:56.658257    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:38:56.668296    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:38:56.668371    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:38:56.678841    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:38:56.678923    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:38:56.689923    9267 logs.go:276] 0 containers: []
	W0923 03:38:56.689937    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:38:56.690009    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:38:56.700744    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:38:56.700763    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:38:56.700769    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:38:56.714647    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:38:56.714657    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:38:56.719116    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:38:56.719125    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:38:56.732920    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:38:56.732930    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:38:56.756076    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:38:56.756084    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:38:56.768215    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:38:56.768225    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:38:56.782242    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:38:56.782251    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:38:56.799728    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:38:56.799741    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:38:56.811247    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:38:56.811257    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:38:56.822679    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:38:56.822689    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:38:56.857689    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:38:56.857703    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:38:56.894296    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:38:56.894306    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:38:56.908775    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:38:56.908788    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:38:56.923504    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:38:56.923515    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:38:56.934660    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:38:56.934672    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:38:56.946911    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:38:56.946924    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:38:56.958915    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:38:56.958927    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:38:59.500039    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:04.502215    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:04.502440    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:04.523431    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:04.523532    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:04.536950    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:04.537043    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:04.548587    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:04.548673    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:04.559294    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:04.559375    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:04.570623    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:04.570697    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:04.581780    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:04.581869    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:04.592044    9267 logs.go:276] 0 containers: []
	W0923 03:39:04.592055    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:04.592125    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:04.602380    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:39:04.602399    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:04.602405    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:04.637200    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:04.637211    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:04.651673    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:04.651685    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:04.689103    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:04.689115    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:04.701118    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:04.701128    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:04.741389    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:04.741397    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:04.756154    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:04.756167    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:04.771229    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:04.771242    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:04.785552    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:04.785562    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:04.802758    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:04.802767    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:04.817348    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:04.817360    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:04.841697    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:04.841707    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:04.846141    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:04.846148    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:04.860175    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:04.860184    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:04.871432    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:04.871445    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:04.882718    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:04.882728    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:04.897675    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:04.897686    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:07.411242    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:12.413436    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:12.413636    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:12.428998    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:12.429093    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:12.440153    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:12.440230    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:12.450959    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:12.451047    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:12.461646    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:12.461731    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:12.471961    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:12.472046    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:12.485758    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:12.485838    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:12.495690    9267 logs.go:276] 0 containers: []
	W0923 03:39:12.495701    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:12.495771    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:12.506362    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:39:12.506380    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:12.506385    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:12.520501    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:12.520510    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:12.563821    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:12.563832    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:12.577829    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:12.577840    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:12.618786    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:12.618799    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:12.633656    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:12.633669    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:12.648548    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:12.648562    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:12.671505    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:12.671524    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:12.684117    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:12.684129    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:12.696126    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:12.696138    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:12.710384    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:12.710395    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:12.724622    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:12.724632    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:12.728802    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:12.728810    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:12.740666    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:12.740678    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:12.752250    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:12.752259    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:12.769216    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:12.769226    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:12.780696    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:12.780706    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:15.318246    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:20.320584    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:20.320874    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:20.351077    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:20.351192    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:20.365646    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:20.365748    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:20.377636    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:20.377717    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:20.388319    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:20.388404    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:20.398477    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:20.398564    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:20.409419    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:20.409500    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:20.425849    9267 logs.go:276] 0 containers: []
	W0923 03:39:20.425862    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:20.425940    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:20.436868    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:39:20.436884    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:20.436890    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:20.476693    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:20.476702    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:20.488303    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:20.488312    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:20.503096    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:20.503106    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:20.517336    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:20.517348    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:20.531938    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:20.531951    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:20.543565    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:20.543577    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:20.554820    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:20.554829    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:20.566384    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:20.566397    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:20.570555    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:20.570562    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:20.587818    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:20.587832    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:20.607049    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:20.607062    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:20.617850    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:20.617862    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:20.641908    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:20.641915    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:20.676153    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:20.676166    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:20.713587    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:20.713604    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:20.729809    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:20.729822    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:23.245418    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:28.247080    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:28.247313    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:28.263272    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:28.263374    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:28.275198    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:28.275276    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:28.286563    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:28.286646    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:28.297191    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:28.297272    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:28.307783    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:28.307863    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:28.317812    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:28.317894    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:28.328361    9267 logs.go:276] 0 containers: []
	W0923 03:39:28.328374    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:28.328448    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:28.339405    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:39:28.339426    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:28.339432    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:28.377017    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:28.377029    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:28.415560    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:28.415573    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:28.431103    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:28.431117    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:28.442688    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:28.442700    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:28.446798    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:28.446805    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:28.465055    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:28.465064    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:28.476678    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:28.476690    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:28.491497    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:28.491507    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:28.515872    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:28.515880    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:28.528167    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:28.528176    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:28.549690    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:28.549701    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:28.560652    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:28.560666    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:28.595900    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:28.595910    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:28.610165    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:28.610176    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:28.632436    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:28.632448    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:28.644348    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:28.644358    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:31.169690    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:36.171978    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:36.172277    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:36.201863    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:36.201996    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:36.224849    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:36.224944    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:36.237780    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:36.237864    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:36.249766    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:36.249852    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:36.261976    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:36.262051    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:36.272754    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:36.272828    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:36.282688    9267 logs.go:276] 0 containers: []
	W0923 03:39:36.282698    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:36.282762    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:36.295114    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:39:36.295133    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:36.295139    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:36.309999    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:36.310010    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:36.321712    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:36.321724    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:36.333560    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:36.333571    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:36.368452    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:36.368463    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:36.407204    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:36.407215    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:36.423348    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:36.423360    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:36.435068    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:36.435079    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:36.458353    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:36.458365    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:36.470343    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:36.470354    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:36.481413    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:36.481425    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:36.500046    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:36.500059    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:36.514187    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:36.514201    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:36.553503    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:36.553511    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:36.557838    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:36.557845    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:36.572816    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:36.572829    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:36.584464    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:36.584476    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:39.109943    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:44.112487    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:44.112709    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:44.133899    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:44.134070    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:44.155366    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:44.155439    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:44.166586    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:44.166662    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:44.178267    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:44.178367    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:44.190554    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:44.190641    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:44.206683    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:44.206770    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:44.217861    9267 logs.go:276] 0 containers: []
	W0923 03:39:44.217875    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:44.217944    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:44.229694    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:39:44.229712    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:44.229718    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:44.271413    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:44.271431    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:44.289486    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:44.289501    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:44.302783    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:44.302800    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:44.325003    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:44.325011    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:44.362244    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:44.362251    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:44.398541    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:44.398552    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:44.416809    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:44.416822    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:44.429055    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:44.429065    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:44.440498    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:44.440508    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:44.452193    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:44.452208    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:44.467013    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:44.467023    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:44.478459    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:44.478470    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:44.489421    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:44.489432    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:44.504184    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:44.504194    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:44.518717    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:44.518730    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:44.522633    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:44.522640    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:47.037001    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:52.039748    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:52.040042    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:52.070659    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:52.070809    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:52.091308    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:52.091409    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:39:52.106642    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:39:52.106727    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:39:52.118300    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:39:52.118387    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:39:52.129399    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:39:52.129480    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:39:52.140584    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:39:52.140664    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:39:52.151153    9267 logs.go:276] 0 containers: []
	W0923 03:39:52.151165    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:39:52.151230    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:39:52.162122    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:39:52.162139    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:39:52.162145    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:39:52.176572    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:39:52.176583    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:39:52.216741    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:39:52.216755    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:39:52.251832    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:39:52.251843    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:39:52.274256    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:39:52.274262    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:39:52.285621    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:39:52.285632    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:39:52.323686    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:39:52.323697    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:39:52.335155    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:39:52.335165    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:39:52.349654    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:39:52.349669    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:39:52.366750    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:39:52.366761    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:39:52.383288    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:39:52.383298    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:39:52.395290    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:39:52.395301    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:39:52.400075    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:39:52.400083    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:39:52.414426    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:39:52.414441    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:39:52.428964    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:39:52.428975    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:39:52.440216    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:39:52.440228    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:39:52.452177    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:39:52.452187    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:39:54.968958    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:39:59.971299    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:39:59.971472    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:39:59.984475    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:39:59.984590    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:39:59.998366    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:39:59.998557    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:00.009396    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:40:00.009479    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:00.020245    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:40:00.020327    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:00.030975    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:40:00.031047    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:00.041914    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:40:00.041998    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:00.052114    9267 logs.go:276] 0 containers: []
	W0923 03:40:00.052127    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:00.052203    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:00.062597    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:40:00.062615    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:40:00.062620    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:40:00.073846    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:40:00.073858    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:40:00.085610    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:40:00.085619    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:40:00.099890    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:40:00.099903    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:40:00.111186    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:40:00.111198    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:40:00.124546    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:40:00.124559    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:40:00.163411    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:40:00.163425    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:40:00.175480    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:40:00.175491    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:40:00.192448    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:40:00.192464    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:00.206243    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:00.206256    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:00.210532    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:40:00.210541    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:40:00.221950    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:40:00.221961    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:40:00.236479    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:00.236490    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:00.271541    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:40:00.271552    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:40:00.286668    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:40:00.286678    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:40:00.308891    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:00.308905    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:00.330614    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:00.330621    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:02.870019    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:07.872152    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:07.872298    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:40:07.884678    9267 logs.go:276] 2 containers: [e56d6672af6c 560a63128e94]
	I0923 03:40:07.884774    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:40:07.900235    9267 logs.go:276] 2 containers: [9f8eabf2019e 66fdd05327e1]
	I0923 03:40:07.900312    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:40:07.912115    9267 logs.go:276] 1 containers: [6752664bb454]
	I0923 03:40:07.912193    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:40:07.922843    9267 logs.go:276] 2 containers: [8cf4fd3b02dd b7f7027cb0f6]
	I0923 03:40:07.922916    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:40:07.933642    9267 logs.go:276] 1 containers: [348b7054823e]
	I0923 03:40:07.933716    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:40:07.944280    9267 logs.go:276] 2 containers: [943fba58ac97 d3552f071944]
	I0923 03:40:07.944362    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:40:07.954128    9267 logs.go:276] 0 containers: []
	W0923 03:40:07.954140    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:40:07.954208    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:40:07.964012    9267 logs.go:276] 2 containers: [df518460d760 ac6774542273]
	I0923 03:40:07.964031    9267 logs.go:123] Gathering logs for kube-apiserver [e56d6672af6c] ...
	I0923 03:40:07.964036    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e56d6672af6c"
	I0923 03:40:07.977848    9267 logs.go:123] Gathering logs for storage-provisioner [df518460d760] ...
	I0923 03:40:07.977857    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df518460d760"
	I0923 03:40:07.989669    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:40:07.989680    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:40:08.010887    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:40:08.010896    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:40:08.022410    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:40:08.022421    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:40:08.026643    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:40:08.026649    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:40:08.062529    9267 logs.go:123] Gathering logs for kube-apiserver [560a63128e94] ...
	I0923 03:40:08.062545    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560a63128e94"
	I0923 03:40:08.101561    9267 logs.go:123] Gathering logs for etcd [66fdd05327e1] ...
	I0923 03:40:08.101572    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66fdd05327e1"
	I0923 03:40:08.115762    9267 logs.go:123] Gathering logs for coredns [6752664bb454] ...
	I0923 03:40:08.115772    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6752664bb454"
	I0923 03:40:08.127600    9267 logs.go:123] Gathering logs for kube-scheduler [8cf4fd3b02dd] ...
	I0923 03:40:08.127611    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cf4fd3b02dd"
	I0923 03:40:08.139480    9267 logs.go:123] Gathering logs for etcd [9f8eabf2019e] ...
	I0923 03:40:08.139490    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8eabf2019e"
	I0923 03:40:08.153808    9267 logs.go:123] Gathering logs for kube-scheduler [b7f7027cb0f6] ...
	I0923 03:40:08.153819    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f7027cb0f6"
	I0923 03:40:08.169498    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:40:08.169509    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:40:08.209255    9267 logs.go:123] Gathering logs for kube-proxy [348b7054823e] ...
	I0923 03:40:08.209273    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348b7054823e"
	I0923 03:40:08.221386    9267 logs.go:123] Gathering logs for kube-controller-manager [943fba58ac97] ...
	I0923 03:40:08.221399    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 943fba58ac97"
	I0923 03:40:08.243514    9267 logs.go:123] Gathering logs for kube-controller-manager [d3552f071944] ...
	I0923 03:40:08.243528    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3552f071944"
	I0923 03:40:08.260275    9267 logs.go:123] Gathering logs for storage-provisioner [ac6774542273] ...
	I0923 03:40:08.260288    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6774542273"
	I0923 03:40:10.773594    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:15.776165    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:15.776283    9267 kubeadm.go:597] duration metric: took 4m3.831458375s to restartPrimaryControlPlane
	W0923 03:40:15.776370    9267 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0923 03:40:15.776411    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0923 03:40:16.830709    9267 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.054307709s)
	I0923 03:40:16.830800    9267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 03:40:16.835772    9267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 03:40:16.838857    9267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 03:40:16.841756    9267 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 03:40:16.841763    9267 kubeadm.go:157] found existing configuration files:
	
	I0923 03:40:16.841791    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/admin.conf
	I0923 03:40:16.844205    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 03:40:16.844233    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 03:40:16.847640    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/kubelet.conf
	I0923 03:40:16.850689    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 03:40:16.850714    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 03:40:16.853689    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/controller-manager.conf
	I0923 03:40:16.856090    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 03:40:16.856119    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 03:40:16.859136    9267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/scheduler.conf
	I0923 03:40:16.862030    9267 kubeadm.go:163] "https://control-plane.minikube.internal:51518" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51518 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 03:40:16.862056    9267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 03:40:16.864501    9267 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 03:40:16.882024    9267 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0923 03:40:16.882052    9267 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 03:40:16.930683    9267 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 03:40:16.930773    9267 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 03:40:16.930819    9267 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0923 03:40:16.986863    9267 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 03:40:16.990141    9267 out.go:235]   - Generating certificates and keys ...
	I0923 03:40:16.990182    9267 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 03:40:16.990219    9267 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 03:40:16.990255    9267 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 03:40:16.990289    9267 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0923 03:40:16.990345    9267 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0923 03:40:16.990376    9267 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0923 03:40:16.990406    9267 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0923 03:40:16.990472    9267 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0923 03:40:16.990506    9267 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 03:40:16.990550    9267 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 03:40:16.990573    9267 kubeadm.go:310] [certs] Using the existing "sa" key
	I0923 03:40:16.990605    9267 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 03:40:17.105146    9267 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 03:40:17.238106    9267 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 03:40:17.432813    9267 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 03:40:17.517074    9267 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 03:40:17.546497    9267 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 03:40:17.547649    9267 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 03:40:17.547671    9267 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 03:40:17.612223    9267 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 03:40:17.616399    9267 out.go:235]   - Booting up control plane ...
	I0923 03:40:17.616449    9267 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 03:40:17.616506    9267 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 03:40:17.616559    9267 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 03:40:17.616622    9267 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 03:40:17.616821    9267 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0923 03:40:22.117416    9267 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502135 seconds
	I0923 03:40:22.117474    9267 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 03:40:22.121103    9267 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 03:40:22.636753    9267 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 03:40:22.637027    9267 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-516000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 03:40:23.141969    9267 kubeadm.go:310] [bootstrap-token] Using token: 40qyrs.ydvwxghv2sden5ot
	I0923 03:40:23.144497    9267 out.go:235]   - Configuring RBAC rules ...
	I0923 03:40:23.144551    9267 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 03:40:23.144596    9267 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 03:40:23.148171    9267 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 03:40:23.149154    9267 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 03:40:23.150200    9267 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 03:40:23.151032    9267 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 03:40:23.154398    9267 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 03:40:23.327810    9267 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 03:40:23.545967    9267 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 03:40:23.546455    9267 kubeadm.go:310] 
	I0923 03:40:23.546493    9267 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 03:40:23.546496    9267 kubeadm.go:310] 
	I0923 03:40:23.546535    9267 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 03:40:23.546538    9267 kubeadm.go:310] 
	I0923 03:40:23.546554    9267 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 03:40:23.546586    9267 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 03:40:23.546609    9267 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 03:40:23.546616    9267 kubeadm.go:310] 
	I0923 03:40:23.546661    9267 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 03:40:23.546668    9267 kubeadm.go:310] 
	I0923 03:40:23.546695    9267 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 03:40:23.546700    9267 kubeadm.go:310] 
	I0923 03:40:23.546730    9267 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 03:40:23.546776    9267 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 03:40:23.546820    9267 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 03:40:23.546824    9267 kubeadm.go:310] 
	I0923 03:40:23.546872    9267 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 03:40:23.546910    9267 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 03:40:23.546913    9267 kubeadm.go:310] 
	I0923 03:40:23.546958    9267 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 40qyrs.ydvwxghv2sden5ot \
	I0923 03:40:23.547015    9267 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:81f714c3d291eb98c715930c3a37747b44257f11dd80fa89b92bbab22cea301f \
	I0923 03:40:23.547027    9267 kubeadm.go:310] 	--control-plane 
	I0923 03:40:23.547031    9267 kubeadm.go:310] 
	I0923 03:40:23.547081    9267 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 03:40:23.547084    9267 kubeadm.go:310] 
	I0923 03:40:23.547133    9267 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 40qyrs.ydvwxghv2sden5ot \
	I0923 03:40:23.547200    9267 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:81f714c3d291eb98c715930c3a37747b44257f11dd80fa89b92bbab22cea301f 
	I0923 03:40:23.547425    9267 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 03:40:23.547440    9267 cni.go:84] Creating CNI manager for ""
	I0923 03:40:23.547451    9267 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:40:23.551419    9267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 03:40:23.554445    9267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 03:40:23.557607    9267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 03:40:23.563523    9267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 03:40:23.563588    9267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 03:40:23.563613    9267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-516000 minikube.k8s.io/updated_at=2024_09_23T03_40_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=stopped-upgrade-516000 minikube.k8s.io/primary=true
	I0923 03:40:23.610046    9267 ops.go:34] apiserver oom_adj: -16
	I0923 03:40:23.610192    9267 kubeadm.go:1113] duration metric: took 46.662792ms to wait for elevateKubeSystemPrivileges
	I0923 03:40:23.610203    9267 kubeadm.go:394] duration metric: took 4m11.678443875s to StartCluster
	I0923 03:40:23.610212    9267 settings.go:142] acquiring lock: {Name:mk179b7e7e669ed9fc071f7eb5301e91538a634e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:40:23.610311    9267 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:40:23.610748    9267 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/kubeconfig: {Name:mkaea904cf5cdb46cc70169c92ea1151561be4fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:40:23.610959    9267 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:40:23.611011    9267 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 03:40:23.611089    9267 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-516000"
	I0923 03:40:23.611099    9267 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-516000"
	I0923 03:40:23.611099    9267 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-516000"
	I0923 03:40:23.611109    9267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-516000"
	W0923 03:40:23.611102    9267 addons.go:243] addon storage-provisioner should already be in state true
	I0923 03:40:23.611138    9267 host.go:66] Checking if "stopped-upgrade-516000" exists ...
	I0923 03:40:23.611230    9267 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:40:23.612319    9267 kapi.go:59] client config for stopped-upgrade-516000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/stopped-upgrade-516000/client.key", CAFile:"/Users/jenkins/minikube-integration/19689-6600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10675a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 03:40:23.612447    9267 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-516000"
	W0923 03:40:23.612452    9267 addons.go:243] addon default-storageclass should already be in state true
	I0923 03:40:23.612460    9267 host.go:66] Checking if "stopped-upgrade-516000" exists ...
	I0923 03:40:23.615386    9267 out.go:177] * Verifying Kubernetes components...
	I0923 03:40:23.615799    9267 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 03:40:23.618598    9267 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 03:40:23.618606    9267 sshutil.go:53] new ssh client: &{IP:localhost Port:51483 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/id_rsa Username:docker}
	I0923 03:40:23.621378    9267 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 03:40:23.625382    9267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 03:40:23.631384    9267 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 03:40:23.631392    9267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 03:40:23.631400    9267 sshutil.go:53] new ssh client: &{IP:localhost Port:51483 SSHKeyPath:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/stopped-upgrade-516000/id_rsa Username:docker}
	I0923 03:40:23.703228    9267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 03:40:23.708919    9267 api_server.go:52] waiting for apiserver process to appear ...
	I0923 03:40:23.708966    9267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 03:40:23.712527    9267 api_server.go:72] duration metric: took 101.558208ms to wait for apiserver process to appear ...
	I0923 03:40:23.712535    9267 api_server.go:88] waiting for apiserver healthz status ...
	I0923 03:40:23.712542    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:23.725112    9267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 03:40:23.764823    9267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 03:40:24.095165    9267 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 03:40:24.095176    9267 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 03:40:28.714544    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:28.714597    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:33.714811    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:33.714853    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:38.715123    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:38.715148    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:43.715930    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:43.715960    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:48.716473    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:48.716516    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:40:53.717282    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:53.717330    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0923 03:40:54.096772    9267 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0923 03:40:54.101645    9267 out.go:177] * Enabled addons: storage-provisioner
	I0923 03:40:54.117565    9267 addons.go:510] duration metric: took 30.507246625s for enable addons: enabled=[storage-provisioner]
	I0923 03:40:58.718367    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:40:58.718406    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:41:03.719737    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:41:03.719782    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:41:08.720275    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:41:08.720307    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:41:13.722185    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:41:13.722216    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:41:18.724336    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:41:18.724378    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:41:23.726511    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:41:23.726692    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:41:23.737238    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:41:23.737313    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:41:23.747423    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:41:23.747501    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:41:23.757978    9267 logs.go:276] 2 containers: [261ba45465c0 538c617e2413]
	I0923 03:41:23.758056    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:41:23.768124    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:41:23.768196    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:41:23.778481    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:41:23.778556    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:41:23.789308    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:41:23.789384    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:41:23.799377    9267 logs.go:276] 0 containers: []
	W0923 03:41:23.799391    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:41:23.799459    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:41:23.811102    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:41:23.811121    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:41:23.811128    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:41:23.849621    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:41:23.849630    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:41:23.891668    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:41:23.891683    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:41:23.909720    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:41:23.909736    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:41:23.923850    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:41:23.923861    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:41:23.942570    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:41:23.942585    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:41:23.954763    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:41:23.954777    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:41:23.965972    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:41:23.965982    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:41:23.977416    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:41:23.977426    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:41:23.981512    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:41:23.981521    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:41:24.002165    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:41:24.002176    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:41:24.026423    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:41:24.026434    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:41:24.050625    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:41:24.050637    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:41:26.563957    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:41:31.566722    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:41:31.567304    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:41:31.606131    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:41:31.606291    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:41:31.628620    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:41:31.628729    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:41:31.642566    9267 logs.go:276] 2 containers: [261ba45465c0 538c617e2413]
	I0923 03:41:31.642659    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:41:31.654192    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:41:31.654259    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:41:31.666288    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:41:31.666363    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:41:31.678186    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:41:31.678258    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:41:31.688566    9267 logs.go:276] 0 containers: []
	W0923 03:41:31.688579    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:41:31.688646    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:41:31.699080    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:41:31.699097    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:41:31.699102    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:41:31.716586    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:41:31.716596    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:41:31.732372    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:41:31.732387    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:41:31.757274    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:41:31.757280    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:41:31.761395    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:41:31.761402    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:41:31.799668    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:41:31.799682    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:41:31.813938    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:41:31.813951    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:41:31.828386    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:41:31.828399    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:41:31.839788    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:41:31.839802    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:41:31.876580    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:41:31.876590    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:41:31.887999    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:41:31.888007    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:41:31.899413    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:41:31.899422    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:41:31.915400    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:41:31.915409    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:41:34.429392    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:41:39.430597    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:41:39.430825    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:41:39.444200    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:41:39.444279    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:41:39.459721    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:41:39.459814    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:41:39.471135    9267 logs.go:276] 2 containers: [261ba45465c0 538c617e2413]
	I0923 03:41:39.471216    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:41:39.482810    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:41:39.482893    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:41:39.493944    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:41:39.494037    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:41:39.508037    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:41:39.508122    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:41:39.518494    9267 logs.go:276] 0 containers: []
	W0923 03:41:39.518507    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:41:39.518578    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:41:39.529235    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:41:39.529250    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:41:39.529256    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:41:39.567638    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:41:39.567648    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:41:39.572423    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:41:39.572429    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:41:39.584252    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:41:39.584263    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:41:39.595403    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:41:39.595416    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:41:39.618868    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:41:39.618877    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:41:39.631561    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:41:39.631577    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:41:39.666695    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:41:39.666711    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:41:39.680404    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:41:39.680413    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:41:39.696250    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:41:39.696262    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:41:39.707754    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:41:39.707764    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:41:39.723105    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:41:39.723119    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:41:39.745266    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:41:39.745279    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:41:42.264313    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:41:47.266835    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:41:47.267185    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:41:47.302715    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:41:47.302871    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:41:47.328157    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:41:47.328294    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:41:47.343278    9267 logs.go:276] 2 containers: [261ba45465c0 538c617e2413]
	I0923 03:41:47.343373    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:41:47.354866    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:41:47.354944    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:41:47.365918    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:41:47.365996    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:41:47.377030    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:41:47.377112    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:41:47.387192    9267 logs.go:276] 0 containers: []
	W0923 03:41:47.387204    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:41:47.387272    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:41:47.397331    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:41:47.397348    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:41:47.397354    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:41:47.440327    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:41:47.440339    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:41:47.454568    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:41:47.454581    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:41:47.467951    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:41:47.467961    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:41:47.478962    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:41:47.478972    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:41:47.490304    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:41:47.490317    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:41:47.504975    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:41:47.504987    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:41:47.516667    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:41:47.516682    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:41:47.521075    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:41:47.521084    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:41:47.532332    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:41:47.532342    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:41:47.557174    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:41:47.557181    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:41:47.575151    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:41:47.575162    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:41:47.586885    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:41:47.586899    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:41:50.126821    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:41:55.129121    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:41:55.129377    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:41:55.154386    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:41:55.154492    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:41:55.168581    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:41:55.168680    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:41:55.180678    9267 logs.go:276] 2 containers: [261ba45465c0 538c617e2413]
	I0923 03:41:55.180761    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:41:55.191300    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:41:55.191371    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:41:55.201596    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:41:55.201683    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:41:55.212147    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:41:55.212230    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:41:55.222714    9267 logs.go:276] 0 containers: []
	W0923 03:41:55.222726    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:41:55.222798    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:41:55.232682    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:41:55.232695    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:41:55.232700    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:41:55.244104    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:41:55.244115    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:41:55.255622    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:41:55.255634    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:41:55.273421    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:41:55.273431    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:41:55.296240    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:41:55.296247    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:41:55.333377    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:41:55.333385    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:41:55.337354    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:41:55.337362    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:41:55.352369    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:41:55.352386    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:41:55.363898    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:41:55.363906    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:41:55.375557    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:41:55.375566    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:41:55.388468    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:41:55.388477    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:41:55.430931    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:41:55.430942    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:41:55.445004    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:41:55.445018    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:41:57.961849    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:42:02.964652    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:42:02.965171    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:42:03.001866    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:42:03.002018    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:42:03.023127    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:42:03.023262    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:42:03.038220    9267 logs.go:276] 2 containers: [261ba45465c0 538c617e2413]
	I0923 03:42:03.038320    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:42:03.050522    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:42:03.050602    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:42:03.060912    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:42:03.061001    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:42:03.075705    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:42:03.075792    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:42:03.091941    9267 logs.go:276] 0 containers: []
	W0923 03:42:03.091957    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:42:03.092036    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:42:03.109829    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:42:03.109843    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:42:03.109849    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:42:03.113939    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:42:03.113947    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:42:03.150044    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:42:03.150056    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:42:03.164132    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:42:03.164143    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:42:03.175691    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:42:03.175702    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:42:03.187003    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:42:03.187012    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:42:03.204435    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:42:03.204446    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:42:03.229286    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:42:03.229293    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:42:03.267302    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:42:03.267310    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:42:03.279390    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:42:03.279405    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:42:03.294834    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:42:03.294845    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:42:03.307287    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:42:03.307302    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:42:03.319318    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:42:03.319329    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:42:05.837044    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:42:10.839652    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:42:10.840186    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:42:10.880289    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:42:10.880439    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:42:10.902260    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:42:10.902376    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:42:10.918086    9267 logs.go:276] 2 containers: [261ba45465c0 538c617e2413]
	I0923 03:42:10.918177    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:42:10.930904    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:42:10.930978    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:42:10.942139    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:42:10.942216    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:42:10.952814    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:42:10.952897    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:42:10.963241    9267 logs.go:276] 0 containers: []
	W0923 03:42:10.963252    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:42:10.963323    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:42:10.974033    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:42:10.974049    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:42:10.974054    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:42:10.985427    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:42:10.985437    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:42:11.010048    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:42:11.010058    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:42:11.021410    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:42:11.021422    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:42:11.059192    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:42:11.059202    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:42:11.073606    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:42:11.073620    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:42:11.089114    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:42:11.089129    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:42:11.100658    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:42:11.100668    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:42:11.115411    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:42:11.115426    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:42:11.126908    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:42:11.126918    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:42:11.144294    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:42:11.144305    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:42:11.148682    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:42:11.148688    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:42:11.187856    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:42:11.187866    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:42:13.702068    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:42:18.704187    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:42:18.704317    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:42:18.715537    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:42:18.715617    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:42:18.726287    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:42:18.726368    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:42:18.736508    9267 logs.go:276] 2 containers: [261ba45465c0 538c617e2413]
	I0923 03:42:18.736582    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:42:18.747905    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:42:18.747988    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:42:18.758170    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:42:18.758254    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:42:18.768930    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:42:18.769007    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:42:18.779145    9267 logs.go:276] 0 containers: []
	W0923 03:42:18.779156    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:42:18.779219    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:42:18.789327    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:42:18.789341    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:42:18.789346    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:42:18.800831    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:42:18.800841    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:42:18.817025    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:42:18.817038    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:42:18.828498    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:42:18.828511    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:42:18.840264    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:42:18.840281    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:42:18.861814    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:42:18.861827    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:42:18.866023    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:42:18.866031    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:42:18.902284    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:42:18.902296    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:42:18.916602    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:42:18.916610    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:42:18.933790    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:42:18.933800    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:42:18.956703    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:42:18.956709    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:42:18.993948    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:42:18.993956    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:42:19.008252    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:42:19.008267    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:42:21.520372    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:42:26.521883    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:42:26.522280    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:42:26.551166    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:42:26.551312    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:42:26.569591    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:42:26.569687    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:42:26.583115    9267 logs.go:276] 2 containers: [261ba45465c0 538c617e2413]
	I0923 03:42:26.583198    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:42:26.595013    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:42:26.595096    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:42:26.609671    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:42:26.609754    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:42:26.622619    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:42:26.622689    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:42:26.633256    9267 logs.go:276] 0 containers: []
	W0923 03:42:26.633269    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:42:26.633339    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:42:26.643866    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:42:26.643882    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:42:26.643887    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:42:26.656029    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:42:26.656038    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:42:26.673747    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:42:26.673759    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:42:26.685242    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:42:26.685252    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:42:26.723370    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:42:26.723386    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:42:26.738477    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:42:26.738489    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:42:26.750621    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:42:26.750631    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:42:26.763006    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:42:26.763015    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:42:26.782438    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:42:26.782450    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:42:26.805638    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:42:26.805644    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:42:26.817762    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:42:26.817774    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:42:26.855455    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:42:26.855465    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:42:26.859964    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:42:26.859974    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:42:29.376988    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:42:34.379658    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:42:34.380210    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:42:34.421661    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:42:34.421824    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:42:34.443540    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:42:34.443665    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:42:34.461843    9267 logs.go:276] 2 containers: [261ba45465c0 538c617e2413]
	I0923 03:42:34.461919    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:42:34.474880    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:42:34.474961    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:42:34.486596    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:42:34.486676    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:42:34.498092    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:42:34.498171    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:42:34.508502    9267 logs.go:276] 0 containers: []
	W0923 03:42:34.508516    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:42:34.508573    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:42:34.519345    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:42:34.519360    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:42:34.519366    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:42:34.531809    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:42:34.531819    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:42:34.549946    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:42:34.549956    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:42:34.574658    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:42:34.574665    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:42:34.586296    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:42:34.586305    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:42:34.622427    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:42:34.622439    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:42:34.636947    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:42:34.636958    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:42:34.649660    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:42:34.649669    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:42:34.661427    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:42:34.661441    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:42:34.676440    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:42:34.676451    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:42:34.688624    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:42:34.688634    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:42:34.727009    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:42:34.727017    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:42:34.731092    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:42:34.731099    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:42:37.246900    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:42:42.249659    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:42:42.250186    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:42:42.289794    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:42:42.289948    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:42:42.313045    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:42:42.313164    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:42:42.328734    9267 logs.go:276] 4 containers: [6e2fcbbad4a7 cfb02c930992 261ba45465c0 538c617e2413]
	I0923 03:42:42.328829    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:42:42.341658    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:42:42.341740    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:42:42.353262    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:42:42.353344    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:42:42.365010    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:42:42.365088    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:42:42.381809    9267 logs.go:276] 0 containers: []
	W0923 03:42:42.381821    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:42:42.381899    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:42:42.393110    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:42:42.393126    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:42:42.393132    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:42:42.412288    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:42:42.412298    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:42:42.425461    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:42:42.425472    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:42:42.462220    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:42:42.462232    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:42:42.467145    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:42:42.467155    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:42:42.483004    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:42:42.483020    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:42:42.502047    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:42:42.502062    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:42:42.515440    9267 logs.go:123] Gathering logs for coredns [6e2fcbbad4a7] ...
	I0923 03:42:42.515452    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e2fcbbad4a7"
	I0923 03:42:42.528425    9267 logs.go:123] Gathering logs for coredns [cfb02c930992] ...
	I0923 03:42:42.528437    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb02c930992"
	I0923 03:42:42.542920    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:42:42.542931    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:42:42.556172    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:42:42.556184    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:42:42.582368    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:42:42.582386    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:42:42.621514    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:42:42.621525    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:42:42.634511    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:42:42.634522    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:42:42.653314    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:42:42.653330    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:42:45.168355    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:42:50.170627    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:42:50.171241    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:42:50.210682    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:42:50.210857    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:42:50.233177    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:42:50.233285    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:42:50.248894    9267 logs.go:276] 4 containers: [6e2fcbbad4a7 cfb02c930992 261ba45465c0 538c617e2413]
	I0923 03:42:50.248983    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:42:50.261748    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:42:50.261832    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:42:50.273420    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:42:50.273504    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:42:50.284866    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:42:50.284951    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:42:50.295947    9267 logs.go:276] 0 containers: []
	W0923 03:42:50.295960    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:42:50.296028    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:42:50.307779    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:42:50.307796    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:42:50.307802    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:42:50.343220    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:42:50.343233    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:42:50.359005    9267 logs.go:123] Gathering logs for coredns [6e2fcbbad4a7] ...
	I0923 03:42:50.359016    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e2fcbbad4a7"
	I0923 03:42:50.371804    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:42:50.371815    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:42:50.384735    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:42:50.384745    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:42:50.397194    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:42:50.397205    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:42:50.409371    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:42:50.409387    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:42:50.446078    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:42:50.446088    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:42:50.450844    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:42:50.450853    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:42:50.468581    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:42:50.468590    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:42:50.487182    9267 logs.go:123] Gathering logs for coredns [cfb02c930992] ...
	I0923 03:42:50.487192    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb02c930992"
	I0923 03:42:50.499314    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:42:50.499325    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:42:50.524691    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:42:50.524699    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:42:50.539814    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:42:50.539826    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:42:50.552275    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:42:50.552288    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:42:53.065822    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:42:58.068225    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:42:58.068820    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:42:58.117397    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:42:58.117524    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:42:58.138085    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:42:58.138211    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:42:58.153116    9267 logs.go:276] 4 containers: [6e2fcbbad4a7 cfb02c930992 261ba45465c0 538c617e2413]
	I0923 03:42:58.153204    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:42:58.164510    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:42:58.164585    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:42:58.175926    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:42:58.176007    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:42:58.186172    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:42:58.186255    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:42:58.196290    9267 logs.go:276] 0 containers: []
	W0923 03:42:58.196300    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:42:58.196368    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:42:58.206821    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:42:58.206838    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:42:58.206843    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:42:58.243681    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:42:58.243690    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:42:58.255177    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:42:58.255192    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:42:58.280197    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:42:58.280207    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:42:58.319553    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:42:58.319564    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:42:58.333886    9267 logs.go:123] Gathering logs for coredns [cfb02c930992] ...
	I0923 03:42:58.333898    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb02c930992"
	I0923 03:42:58.348828    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:42:58.348839    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:42:58.363318    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:42:58.363329    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:42:58.380944    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:42:58.380953    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:42:58.392545    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:42:58.392556    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:42:58.403792    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:42:58.403806    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:42:58.407973    9267 logs.go:123] Gathering logs for coredns [6e2fcbbad4a7] ...
	I0923 03:42:58.407980    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e2fcbbad4a7"
	I0923 03:42:58.419522    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:42:58.419534    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:42:58.431237    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:42:58.431248    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:42:58.443121    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:42:58.443133    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:43:00.963071    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:43:05.965491    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:43:05.966080    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:43:06.004251    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:43:06.004413    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:43:06.025500    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:43:06.025611    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:43:06.040916    9267 logs.go:276] 4 containers: [6e2fcbbad4a7 cfb02c930992 261ba45465c0 538c617e2413]
	I0923 03:43:06.041012    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:43:06.060273    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:43:06.060357    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:43:06.071199    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:43:06.071281    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:43:06.082123    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:43:06.082205    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:43:06.093141    9267 logs.go:276] 0 containers: []
	W0923 03:43:06.093154    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:43:06.093226    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:43:06.103762    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:43:06.103778    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:43:06.103784    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:43:06.116399    9267 logs.go:123] Gathering logs for coredns [6e2fcbbad4a7] ...
	I0923 03:43:06.116415    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e2fcbbad4a7"
	I0923 03:43:06.128838    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:43:06.128848    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:43:06.140219    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:43:06.140229    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:43:06.151859    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:43:06.151871    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:43:06.186055    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:43:06.186066    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:43:06.200715    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:43:06.200730    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:43:06.212061    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:43:06.212071    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:43:06.231060    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:43:06.231073    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:43:06.257199    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:43:06.257210    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:43:06.295378    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:43:06.295387    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:43:06.299514    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:43:06.299521    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:43:06.314945    9267 logs.go:123] Gathering logs for coredns [cfb02c930992] ...
	I0923 03:43:06.314956    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb02c930992"
	I0923 03:43:06.327345    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:43:06.327357    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:43:06.338900    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:43:06.338913    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:43:08.856073    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:43:13.858268    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:43:13.858749    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:43:13.893691    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:43:13.893841    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:43:13.920410    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:43:13.920515    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:43:13.934098    9267 logs.go:276] 4 containers: [6e2fcbbad4a7 cfb02c930992 261ba45465c0 538c617e2413]
	I0923 03:43:13.934192    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:43:13.945721    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:43:13.945807    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:43:13.956328    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:43:13.956406    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:43:13.967124    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:43:13.967194    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:43:13.977927    9267 logs.go:276] 0 containers: []
	W0923 03:43:13.977941    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:43:13.978015    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:43:13.988756    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:43:13.988773    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:43:13.988778    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:43:14.025964    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:43:14.025971    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:43:14.030509    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:43:14.030515    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:43:14.042841    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:43:14.042852    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:43:14.054360    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:43:14.054368    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:43:14.066726    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:43:14.066736    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:43:14.090035    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:43:14.090050    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:43:14.115901    9267 logs.go:123] Gathering logs for coredns [6e2fcbbad4a7] ...
	I0923 03:43:14.115908    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e2fcbbad4a7"
	I0923 03:43:14.127355    9267 logs.go:123] Gathering logs for coredns [cfb02c930992] ...
	I0923 03:43:14.127365    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb02c930992"
	I0923 03:43:14.141492    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:43:14.141502    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:43:14.153225    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:43:14.153235    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:43:14.167821    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:43:14.167829    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:43:14.201415    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:43:14.201431    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:43:14.218930    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:43:14.218944    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:43:14.232712    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:43:14.232722    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:43:16.748583    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:43:21.749283    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:43:21.749421    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:43:21.761717    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:43:21.761805    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:43:21.773868    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:43:21.773958    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:43:21.788582    9267 logs.go:276] 4 containers: [6e2fcbbad4a7 cfb02c930992 261ba45465c0 538c617e2413]
	I0923 03:43:21.788678    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:43:21.800544    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:43:21.800610    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:43:21.811238    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:43:21.811314    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:43:21.822611    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:43:21.822687    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:43:21.833156    9267 logs.go:276] 0 containers: []
	W0923 03:43:21.833168    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:43:21.833237    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:43:21.847835    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:43:21.847852    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:43:21.847858    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:43:21.859836    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:43:21.859847    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:43:21.872543    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:43:21.872554    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:43:21.909961    9267 logs.go:123] Gathering logs for coredns [cfb02c930992] ...
	I0923 03:43:21.909971    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb02c930992"
	I0923 03:43:21.921996    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:43:21.922010    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:43:21.936911    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:43:21.936923    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:43:21.951891    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:43:21.951904    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:43:21.975474    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:43:21.975484    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:43:21.979840    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:43:21.979850    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:43:21.993623    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:43:21.993636    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:43:22.005826    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:43:22.005840    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:43:22.024036    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:43:22.024045    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:43:22.062518    9267 logs.go:123] Gathering logs for coredns [6e2fcbbad4a7] ...
	I0923 03:43:22.062525    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e2fcbbad4a7"
	I0923 03:43:22.100491    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:43:22.100506    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:43:22.112296    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:43:22.112308    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:43:24.627839    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:43:29.630671    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:43:29.631027    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:43:29.664684    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:43:29.664845    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:43:29.686319    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:43:29.686445    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:43:29.702088    9267 logs.go:276] 4 containers: [6e2fcbbad4a7 cfb02c930992 261ba45465c0 538c617e2413]
	I0923 03:43:29.702193    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:43:29.714663    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:43:29.714749    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:43:29.725845    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:43:29.725923    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:43:29.736130    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:43:29.736200    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:43:29.746175    9267 logs.go:276] 0 containers: []
	W0923 03:43:29.746190    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:43:29.746256    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:43:29.756412    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:43:29.756431    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:43:29.756436    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:43:29.760614    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:43:29.760623    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:43:29.775528    9267 logs.go:123] Gathering logs for coredns [6e2fcbbad4a7] ...
	I0923 03:43:29.775538    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e2fcbbad4a7"
	I0923 03:43:29.786802    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:43:29.786813    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:43:29.797872    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:43:29.797882    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:43:29.810135    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:43:29.810145    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:43:29.847197    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:43:29.847204    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:43:29.882758    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:43:29.882770    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:43:29.894558    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:43:29.894573    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:43:29.906105    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:43:29.906117    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:43:29.923708    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:43:29.923718    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:43:29.937930    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:43:29.937943    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:43:29.954941    9267 logs.go:123] Gathering logs for coredns [cfb02c930992] ...
	I0923 03:43:29.954954    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb02c930992"
	I0923 03:43:29.966295    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:43:29.966308    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:43:29.980223    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:43:29.980233    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:43:32.504513    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:43:37.506648    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:43:37.506953    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:43:37.531961    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:43:37.532106    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:43:37.547935    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:43:37.548035    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:43:37.564410    9267 logs.go:276] 4 containers: [6e2fcbbad4a7 cfb02c930992 261ba45465c0 538c617e2413]
	I0923 03:43:37.564492    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:43:37.574841    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:43:37.574921    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:43:37.589695    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:43:37.589776    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:43:37.600413    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:43:37.600497    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:43:37.610248    9267 logs.go:276] 0 containers: []
	W0923 03:43:37.610259    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:43:37.610324    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:43:37.621037    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:43:37.621051    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:43:37.621056    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:43:37.634962    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:43:37.634977    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:43:37.646760    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:43:37.646773    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:43:37.672248    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:43:37.672258    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:43:37.683467    9267 logs.go:123] Gathering logs for coredns [6e2fcbbad4a7] ...
	I0923 03:43:37.683476    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e2fcbbad4a7"
	I0923 03:43:37.695278    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:43:37.695291    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:43:37.733980    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:43:37.733990    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:43:37.770264    9267 logs.go:123] Gathering logs for coredns [cfb02c930992] ...
	I0923 03:43:37.770280    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb02c930992"
	I0923 03:43:37.782112    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:43:37.782126    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:43:37.794286    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:43:37.794300    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:43:37.809298    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:43:37.809308    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:43:37.820493    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:43:37.820502    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:43:37.825337    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:43:37.825347    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:43:37.840552    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:43:37.840562    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:43:37.852693    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:43:37.852706    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:43:40.372396    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:43:45.373036    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:43:45.373112    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:43:45.385442    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:43:45.385504    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:43:45.396843    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:43:45.396931    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:43:45.407927    9267 logs.go:276] 4 containers: [6e2fcbbad4a7 cfb02c930992 261ba45465c0 538c617e2413]
	I0923 03:43:45.408005    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:43:45.421980    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:43:45.422052    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:43:45.432528    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:43:45.432592    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:43:45.443510    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:43:45.443586    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:43:45.454980    9267 logs.go:276] 0 containers: []
	W0923 03:43:45.454991    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:43:45.455044    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:43:45.467589    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:43:45.467608    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:43:45.467614    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:43:45.482815    9267 logs.go:123] Gathering logs for coredns [cfb02c930992] ...
	I0923 03:43:45.482825    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb02c930992"
	I0923 03:43:45.496837    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:43:45.496847    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:43:45.535728    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:43:45.535739    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:43:45.540169    9267 logs.go:123] Gathering logs for coredns [6e2fcbbad4a7] ...
	I0923 03:43:45.540176    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e2fcbbad4a7"
	I0923 03:43:45.551759    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:43:45.551768    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:43:45.569279    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:43:45.569288    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:43:45.593886    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:43:45.593902    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:43:45.633281    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:43:45.633301    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:43:45.650138    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:43:45.650153    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:43:45.663844    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:43:45.663860    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:43:45.676979    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:43:45.676994    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:43:45.703019    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:43:45.703038    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:43:45.715853    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:43:45.715870    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:43:45.728954    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:43:45.728969    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:43:48.244315    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:43:53.246828    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:43:53.246888    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:43:53.258064    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:43:53.258130    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:43:53.269312    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:43:53.269392    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:43:53.279681    9267 logs.go:276] 4 containers: [6e2fcbbad4a7 cfb02c930992 261ba45465c0 538c617e2413]
	I0923 03:43:53.279766    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:43:53.290194    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:43:53.290270    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:43:53.301220    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:43:53.301303    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:43:53.312179    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:43:53.312268    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:43:53.322097    9267 logs.go:276] 0 containers: []
	W0923 03:43:53.322108    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:43:53.322175    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:43:53.332594    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:43:53.332610    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:43:53.332616    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:43:53.372081    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:43:53.372092    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:43:53.386142    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:43:53.386151    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:43:53.404301    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:43:53.404312    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:43:53.430407    9267 logs.go:123] Gathering logs for coredns [cfb02c930992] ...
	I0923 03:43:53.430420    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb02c930992"
	I0923 03:43:53.442133    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:43:53.442148    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:43:53.453678    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:43:53.453689    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:43:53.464883    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:43:53.464894    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:43:53.469386    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:43:53.469392    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:43:53.481508    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:43:53.481518    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:43:53.496995    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:43:53.497006    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:43:53.532122    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:43:53.532133    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:43:53.546488    9267 logs.go:123] Gathering logs for coredns [6e2fcbbad4a7] ...
	I0923 03:43:53.546498    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e2fcbbad4a7"
	I0923 03:43:53.559187    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:43:53.559196    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:43:53.570201    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:43:53.570210    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:43:56.084101    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:44:01.086258    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:44:01.086831    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:44:01.120650    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:44:01.120804    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:44:01.140888    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:44:01.141011    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:44:01.159429    9267 logs.go:276] 4 containers: [6e2fcbbad4a7 cfb02c930992 261ba45465c0 538c617e2413]
	I0923 03:44:01.159524    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:44:01.171091    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:44:01.171160    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:44:01.186255    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:44:01.186320    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:44:01.196735    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:44:01.196815    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:44:01.207489    9267 logs.go:276] 0 containers: []
	W0923 03:44:01.207506    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:44:01.207574    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:44:01.220373    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:44:01.220390    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:44:01.220397    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:44:01.257802    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:44:01.257812    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:44:01.262161    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:44:01.262169    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:44:01.296265    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:44:01.296278    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:44:01.320209    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:44:01.320222    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:44:01.345857    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:44:01.345870    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:44:01.357780    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:44:01.357793    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:44:01.375323    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:44:01.375336    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:44:01.399928    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:44:01.399935    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:44:01.416729    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:44:01.416739    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:44:01.430789    9267 logs.go:123] Gathering logs for coredns [6e2fcbbad4a7] ...
	I0923 03:44:01.430798    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e2fcbbad4a7"
	I0923 03:44:01.442078    9267 logs.go:123] Gathering logs for coredns [cfb02c930992] ...
	I0923 03:44:01.442088    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb02c930992"
	I0923 03:44:01.453612    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:44:01.453626    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:44:01.469164    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:44:01.469173    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:44:01.480988    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:44:01.481003    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:44:03.995311    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:44:08.997618    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:44:08.997698    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:44:09.009791    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:44:09.009850    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:44:09.022951    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:44:09.023023    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:44:09.034113    9267 logs.go:276] 4 containers: [6e2fcbbad4a7 cfb02c930992 261ba45465c0 538c617e2413]
	I0923 03:44:09.034177    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:44:09.045838    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:44:09.045926    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:44:09.057414    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:44:09.057470    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:44:09.075123    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:44:09.075201    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:44:09.088937    9267 logs.go:276] 0 containers: []
	W0923 03:44:09.088946    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:44:09.089009    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:44:09.101791    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:44:09.101805    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:44:09.101811    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:44:09.106122    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:44:09.106131    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:44:09.120452    9267 logs.go:123] Gathering logs for coredns [cfb02c930992] ...
	I0923 03:44:09.120463    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb02c930992"
	I0923 03:44:09.133400    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:44:09.133410    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:44:09.149525    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:44:09.149534    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:44:09.161355    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:44:09.161368    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:44:09.198973    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:44:09.198982    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:44:09.214576    9267 logs.go:123] Gathering logs for coredns [6e2fcbbad4a7] ...
	I0923 03:44:09.214588    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e2fcbbad4a7"
	I0923 03:44:09.230407    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:44:09.230421    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:44:09.242857    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:44:09.242865    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:44:09.255474    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:44:09.255485    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:44:09.281696    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:44:09.281705    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:44:09.295491    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:44:09.295504    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:44:09.335836    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:44:09.335848    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:44:09.348855    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:44:09.348865    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:44:11.875450    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:44:16.878160    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:44:16.878656    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 03:44:16.917524    9267 logs.go:276] 1 containers: [ebe6021a97cb]
	I0923 03:44:16.917679    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 03:44:16.939676    9267 logs.go:276] 1 containers: [67ed40a9b54a]
	I0923 03:44:16.939823    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 03:44:16.956245    9267 logs.go:276] 4 containers: [6e2fcbbad4a7 cfb02c930992 261ba45465c0 538c617e2413]
	I0923 03:44:16.956334    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 03:44:16.968631    9267 logs.go:276] 1 containers: [f8dbafe1b8f8]
	I0923 03:44:16.968719    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 03:44:16.979850    9267 logs.go:276] 1 containers: [b2d4b51be804]
	I0923 03:44:16.979937    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 03:44:16.990283    9267 logs.go:276] 1 containers: [32da14a038ff]
	I0923 03:44:16.990364    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 03:44:17.000476    9267 logs.go:276] 0 containers: []
	W0923 03:44:17.000491    9267 logs.go:278] No container was found matching "kindnet"
	I0923 03:44:17.000554    9267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 03:44:17.018723    9267 logs.go:276] 1 containers: [89e34588c991]
	I0923 03:44:17.018739    9267 logs.go:123] Gathering logs for kubelet ...
	I0923 03:44:17.018745    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 03:44:17.055036    9267 logs.go:123] Gathering logs for kube-apiserver [ebe6021a97cb] ...
	I0923 03:44:17.055042    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe6021a97cb"
	I0923 03:44:17.069463    9267 logs.go:123] Gathering logs for coredns [538c617e2413] ...
	I0923 03:44:17.069473    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538c617e2413"
	I0923 03:44:17.083687    9267 logs.go:123] Gathering logs for coredns [cfb02c930992] ...
	I0923 03:44:17.083698    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb02c930992"
	I0923 03:44:17.095858    9267 logs.go:123] Gathering logs for describe nodes ...
	I0923 03:44:17.095869    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 03:44:17.130917    9267 logs.go:123] Gathering logs for etcd [67ed40a9b54a] ...
	I0923 03:44:17.130931    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ed40a9b54a"
	I0923 03:44:17.145174    9267 logs.go:123] Gathering logs for Docker ...
	I0923 03:44:17.145185    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 03:44:17.169657    9267 logs.go:123] Gathering logs for container status ...
	I0923 03:44:17.169668    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 03:44:17.182775    9267 logs.go:123] Gathering logs for dmesg ...
	I0923 03:44:17.182787    9267 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 03:44:17.187656    9267 logs.go:123] Gathering logs for coredns [6e2fcbbad4a7] ...
	I0923 03:44:17.187671    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e2fcbbad4a7"
	I0923 03:44:17.208990    9267 logs.go:123] Gathering logs for coredns [261ba45465c0] ...
	I0923 03:44:17.209002    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261ba45465c0"
	I0923 03:44:17.222664    9267 logs.go:123] Gathering logs for kube-scheduler [f8dbafe1b8f8] ...
	I0923 03:44:17.222680    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8dbafe1b8f8"
	I0923 03:44:17.239176    9267 logs.go:123] Gathering logs for kube-proxy [b2d4b51be804] ...
	I0923 03:44:17.239198    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d4b51be804"
	I0923 03:44:17.252447    9267 logs.go:123] Gathering logs for kube-controller-manager [32da14a038ff] ...
	I0923 03:44:17.252461    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32da14a038ff"
	I0923 03:44:17.272189    9267 logs.go:123] Gathering logs for storage-provisioner [89e34588c991] ...
	I0923 03:44:17.272209    9267 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89e34588c991"
	I0923 03:44:19.786877    9267 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 03:44:24.789036    9267 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 03:44:24.795302    9267 out.go:201] 
	W0923 03:44:24.799205    9267 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0923 03:44:24.799235    9267 out.go:270] * 
	W0923 03:44:24.802150    9267 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:44:24.821092    9267 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-516000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.16s)
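
The trace above is one loop repeated until the budget runs out: api_server.go probes https://10.0.2.15:8443/healthz, each probe dies on a 5s client timeout, minikube gathers logs, and the cycle restarts until the 6m0s node-wait deadline expires. As a rough, hypothetical reduction of that pattern (the function name, intervals, and TLS shortcut are ours, not minikube's):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthy polls a /healthz endpoint with a short per-request
// timeout until an overall deadline passes, mirroring the 5s-apart
// "Checking apiserver healthz" / "stopped:" pairs in the log above.
func waitForHealthy(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-probe budget, like the Client.Timeout errors above
		Transport: &http.Transport{
			// the apiserver cert is not trusted during bring-up
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz finally reported healthy
			}
		}
		time.Sleep(2 * time.Second) // pause before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}
```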

                                                
                                    
TestPause/serial/Start (9.99s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-497000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-497000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.926228084s)

                                                
                                                
-- stdout --
	* [pause-497000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-497000" primary control-plane node in "pause-497000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-497000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-497000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-497000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-497000 -n pause-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-497000 -n pause-497000: exit status 7 (60.970417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.99s)
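
Unlike the upgrade test above, these runs never get a VM at all: the qemu2 driver cannot reach the socket_vmnet daemon on the host. A quick way to confirm the daemon itself is the problem (a hypothetical diagnostic, not part of the suite) is to dial the unix socket directly:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same socket the qemu2 driver uses. "connection refused"
	// (or "no such file or directory") means the host-side socket_vmnet
	// daemon is down, which matches every failure in this report.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```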

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-346000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-346000 --driver=qemu2 : exit status 80 (9.816765208s)

                                                
                                                
-- stdout --
	* [NoKubernetes-346000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-346000" primary control-plane node in "NoKubernetes-346000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-346000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-346000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-346000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-346000 -n NoKubernetes-346000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-346000 -n NoKubernetes-346000: exit status 7 (50.140667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-346000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.87s)
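
The post-mortem's --format={{.Host}} is a Go text/template rendered over minikube's status structure, which is why the command prints the bare word "Stopped". A minimal stand-in (the struct here is hypothetical; only the Host field is inferred from the output):

```go
package main

import (
	"os"
	"text/template"
)

// Status models just the field the post-mortem template reads.
type Status struct {
	Host string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// With the VM never created, the reported host state is "Stopped".
	if err := tmpl.Execute(os.Stdout, Status{Host: "Stopped"}); err != nil {
		panic(err)
	}
}
```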

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-346000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-346000 --no-kubernetes --driver=qemu2 : exit status 80 (5.240323584s)

                                                
                                                
-- stdout --
	* [NoKubernetes-346000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-346000
	* Restarting existing qemu2 VM for "NoKubernetes-346000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-346000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-346000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-346000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-346000 -n NoKubernetes-346000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-346000 -n NoKubernetes-346000: exit status 7 (57.239959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-346000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

                                                
                                    
TestNoKubernetes/serial/Start (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-346000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-346000 --no-kubernetes --driver=qemu2 : exit status 80 (5.251779916s)

                                                
                                                
-- stdout --
	* [NoKubernetes-346000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-346000
	* Restarting existing qemu2 VM for "NoKubernetes-346000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-346000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-346000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-346000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-346000 -n NoKubernetes-346000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-346000 -n NoKubernetes-346000: exit status 7 (59.88525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-346000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-346000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-346000 --driver=qemu2 : exit status 80 (5.25314625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-346000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-346000
	* Restarting existing qemu2 VM for "NoKubernetes-346000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-346000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-346000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-346000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-346000 -n NoKubernetes-346000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-346000 -n NoKubernetes-346000: exit status 7 (65.133041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-346000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)
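
Each of these NoKubernetes failures shows the same control flow: one "StartHost failed, but will try again" retry, a second identical connection-refused OUTPUT block, then exit with GUEST_PROVISION. A hypothetical reduction of that retry logic (the real driver code is more involved):

```go
package main

import (
	"errors"
	"fmt"
)

// startHost stands in for the qemu2 driver's host start; here it always
// fails the way every attempt in this report does.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
}

func main() {
	const maxAttempts = 2 // the transcript shows exactly one retry
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = startHost(); err == nil {
			return
		}
		if attempt < maxAttempts {
			fmt.Println("! StartHost failed, but will try again:", err)
		}
	}
	fmt.Println("X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host:", err)
}
```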

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.88626775s)

                                                
                                                
-- stdout --
	* [auto-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-165000" primary control-plane node in "auto-165000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-165000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:42:32.602735    9735 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:42:32.602869    9735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:42:32.602872    9735 out.go:358] Setting ErrFile to fd 2...
	I0923 03:42:32.602874    9735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:42:32.603021    9735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:42:32.604114    9735 out.go:352] Setting JSON to false
	I0923 03:42:32.620558    9735 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6123,"bootTime":1727082029,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:42:32.620668    9735 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:42:32.625837    9735 out.go:177] * [auto-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:42:32.633768    9735 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:42:32.633807    9735 notify.go:220] Checking for updates...
	I0923 03:42:32.640715    9735 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:42:32.643774    9735 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:42:32.646762    9735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:42:32.649707    9735 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:42:32.652743    9735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:42:32.656165    9735 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:42:32.656232    9735 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:42:32.656276    9735 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:42:32.660714    9735 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:42:32.667730    9735 start.go:297] selected driver: qemu2
	I0923 03:42:32.667736    9735 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:42:32.667743    9735 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:42:32.670239    9735 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:42:32.672729    9735 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:42:32.676778    9735 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:42:32.676796    9735 cni.go:84] Creating CNI manager for ""
	I0923 03:42:32.676816    9735 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:42:32.676820    9735 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:42:32.676851    9735 start.go:340] cluster config:
	{Name:auto-165000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:42:32.680623    9735 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:42:32.688728    9735 out.go:177] * Starting "auto-165000" primary control-plane node in "auto-165000" cluster
	I0923 03:42:32.692718    9735 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:42:32.692743    9735 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:42:32.692756    9735 cache.go:56] Caching tarball of preloaded images
	I0923 03:42:32.692831    9735 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:42:32.692843    9735 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:42:32.692909    9735 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/auto-165000/config.json ...
	I0923 03:42:32.692920    9735 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/auto-165000/config.json: {Name:mk3f371639c98185a3bd6216f1a63a455aa68e05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:42:32.693291    9735 start.go:360] acquireMachinesLock for auto-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:42:32.693325    9735 start.go:364] duration metric: took 28µs to acquireMachinesLock for "auto-165000"
	I0923 03:42:32.693337    9735 start.go:93] Provisioning new machine with config: &{Name:auto-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:42:32.693379    9735 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:42:32.701775    9735 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:42:32.719941    9735 start.go:159] libmachine.API.Create for "auto-165000" (driver="qemu2")
	I0923 03:42:32.719980    9735 client.go:168] LocalClient.Create starting
	I0923 03:42:32.720052    9735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:42:32.720085    9735 main.go:141] libmachine: Decoding PEM data...
	I0923 03:42:32.720095    9735 main.go:141] libmachine: Parsing certificate...
	I0923 03:42:32.720142    9735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:42:32.720166    9735 main.go:141] libmachine: Decoding PEM data...
	I0923 03:42:32.720174    9735 main.go:141] libmachine: Parsing certificate...
	I0923 03:42:32.720561    9735 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:42:32.885076    9735 main.go:141] libmachine: Creating SSH key...
	I0923 03:42:32.942924    9735 main.go:141] libmachine: Creating Disk image...
	I0923 03:42:32.942930    9735 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:42:32.943138    9735 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/disk.qcow2
	I0923 03:42:32.952529    9735 main.go:141] libmachine: STDOUT: 
	I0923 03:42:32.952551    9735 main.go:141] libmachine: STDERR: 
	I0923 03:42:32.952623    9735 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/disk.qcow2 +20000M
	I0923 03:42:32.960438    9735 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:42:32.960453    9735 main.go:141] libmachine: STDERR: 
	I0923 03:42:32.960486    9735 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/disk.qcow2
	I0923 03:42:32.960492    9735 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:42:32.960505    9735 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:42:32.960529    9735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:3f:4a:07:e4:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/disk.qcow2
	I0923 03:42:32.962192    9735 main.go:141] libmachine: STDOUT: 
	I0923 03:42:32.962206    9735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:42:32.962227    9735 client.go:171] duration metric: took 242.246416ms to LocalClient.Create
	I0923 03:42:34.964306    9735 start.go:128] duration metric: took 2.270965958s to createHost
	I0923 03:42:34.964319    9735 start.go:83] releasing machines lock for "auto-165000", held for 2.27104s
	W0923 03:42:34.964335    9735 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:42:34.977246    9735 out.go:177] * Deleting "auto-165000" in qemu2 ...
	W0923 03:42:34.988699    9735 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:42:34.988713    9735 start.go:729] Will try again in 5 seconds ...
	I0923 03:42:39.990838    9735 start.go:360] acquireMachinesLock for auto-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:42:39.991319    9735 start.go:364] duration metric: took 385.542µs to acquireMachinesLock for "auto-165000"
	I0923 03:42:39.991427    9735 start.go:93] Provisioning new machine with config: &{Name:auto-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:42:39.991648    9735 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:42:39.998331    9735 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:42:40.041247    9735 start.go:159] libmachine.API.Create for "auto-165000" (driver="qemu2")
	I0923 03:42:40.041300    9735 client.go:168] LocalClient.Create starting
	I0923 03:42:40.041423    9735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:42:40.041484    9735 main.go:141] libmachine: Decoding PEM data...
	I0923 03:42:40.041498    9735 main.go:141] libmachine: Parsing certificate...
	I0923 03:42:40.041559    9735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:42:40.041598    9735 main.go:141] libmachine: Decoding PEM data...
	I0923 03:42:40.041615    9735 main.go:141] libmachine: Parsing certificate...
	I0923 03:42:40.042066    9735 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:42:40.212398    9735 main.go:141] libmachine: Creating SSH key...
	I0923 03:42:40.394287    9735 main.go:141] libmachine: Creating Disk image...
	I0923 03:42:40.394296    9735 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:42:40.394499    9735 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/disk.qcow2
	I0923 03:42:40.404859    9735 main.go:141] libmachine: STDOUT: 
	I0923 03:42:40.404874    9735 main.go:141] libmachine: STDERR: 
	I0923 03:42:40.404939    9735 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/disk.qcow2 +20000M
	I0923 03:42:40.413305    9735 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:42:40.413322    9735 main.go:141] libmachine: STDERR: 
	I0923 03:42:40.413332    9735 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/disk.qcow2
	I0923 03:42:40.413338    9735 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:42:40.413348    9735 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:42:40.413381    9735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:35:94:77:ef:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/auto-165000/disk.qcow2
	I0923 03:42:40.415209    9735 main.go:141] libmachine: STDOUT: 
	I0923 03:42:40.415225    9735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:42:40.415240    9735 client.go:171] duration metric: took 373.939666ms to LocalClient.Create
	I0923 03:42:42.417252    9735 start.go:128] duration metric: took 2.425628667s to createHost
	I0923 03:42:42.417269    9735 start.go:83] releasing machines lock for "auto-165000", held for 2.425987s
	W0923 03:42:42.417374    9735 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:42:42.433608    9735 out.go:201] 
	W0923 03:42:42.437594    9735 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:42:42.437599    9735 out.go:270] * 
	* 
	W0923 03:42:42.438046    9735 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:42:42.453540    9735 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.89s)
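
This start and the kindnet/calico runs that follow die the same way: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), host creation is retried once after 5 seconds, and the run aborts with GUEST_PROVISION (exit status 80). A minimal check to run on the CI host, sketched under the assumption that socket_vmnet is installed at the default paths from its README (the gateway address below is the README default, not something taken from this log):

	# Is the daemon running, and does the unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# If not, start it (vmnet access requires root); adjust the gateway
	# address if this agent's network layout differs from the default.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet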

TestNetworkPlugins/group/kindnet/Start (9.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.954106292s)

-- stdout --
	* [kindnet-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-165000" primary control-plane node in "kindnet-165000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-165000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:42:44.652441    9850 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:42:44.652576    9850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:42:44.652580    9850 out.go:358] Setting ErrFile to fd 2...
	I0923 03:42:44.652582    9850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:42:44.652722    9850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:42:44.653806    9850 out.go:352] Setting JSON to false
	I0923 03:42:44.669990    9850 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6135,"bootTime":1727082029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:42:44.670059    9850 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:42:44.677300    9850 out.go:177] * [kindnet-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:42:44.684172    9850 notify.go:220] Checking for updates...
	I0923 03:42:44.688178    9850 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:42:44.691167    9850 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:42:44.694173    9850 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:42:44.698152    9850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:42:44.701218    9850 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:42:44.704149    9850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:42:44.707538    9850 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:42:44.707606    9850 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:42:44.707650    9850 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:42:44.715153    9850 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:42:44.722101    9850 start.go:297] selected driver: qemu2
	I0923 03:42:44.722106    9850 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:42:44.722112    9850 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:42:44.724189    9850 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:42:44.727170    9850 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:42:44.730172    9850 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:42:44.730188    9850 cni.go:84] Creating CNI manager for "kindnet"
	I0923 03:42:44.730194    9850 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 03:42:44.730222    9850 start.go:340] cluster config:
	{Name:kindnet-165000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:42:44.733570    9850 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:42:44.741141    9850 out.go:177] * Starting "kindnet-165000" primary control-plane node in "kindnet-165000" cluster
	I0923 03:42:44.745129    9850 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:42:44.745143    9850 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:42:44.745150    9850 cache.go:56] Caching tarball of preloaded images
	I0923 03:42:44.745210    9850 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:42:44.745215    9850 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:42:44.745283    9850 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/kindnet-165000/config.json ...
	I0923 03:42:44.745297    9850 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/kindnet-165000/config.json: {Name:mk49de7b68995b459bc6f8b04e8377946e66519e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:42:44.745624    9850 start.go:360] acquireMachinesLock for kindnet-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:42:44.745653    9850 start.go:364] duration metric: took 23.709µs to acquireMachinesLock for "kindnet-165000"
	I0923 03:42:44.745664    9850 start.go:93] Provisioning new machine with config: &{Name:kindnet-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:42:44.745689    9850 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:42:44.751122    9850 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:42:44.766949    9850 start.go:159] libmachine.API.Create for "kindnet-165000" (driver="qemu2")
	I0923 03:42:44.766978    9850 client.go:168] LocalClient.Create starting
	I0923 03:42:44.767036    9850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:42:44.767068    9850 main.go:141] libmachine: Decoding PEM data...
	I0923 03:42:44.767079    9850 main.go:141] libmachine: Parsing certificate...
	I0923 03:42:44.767119    9850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:42:44.767144    9850 main.go:141] libmachine: Decoding PEM data...
	I0923 03:42:44.767154    9850 main.go:141] libmachine: Parsing certificate...
	I0923 03:42:44.767545    9850 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:42:44.931899    9850 main.go:141] libmachine: Creating SSH key...
	I0923 03:42:45.001240    9850 main.go:141] libmachine: Creating Disk image...
	I0923 03:42:45.001246    9850 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:42:45.001431    9850 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/disk.qcow2
	I0923 03:42:45.011016    9850 main.go:141] libmachine: STDOUT: 
	I0923 03:42:45.011033    9850 main.go:141] libmachine: STDERR: 
	I0923 03:42:45.011120    9850 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/disk.qcow2 +20000M
	I0923 03:42:45.019575    9850 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:42:45.019593    9850 main.go:141] libmachine: STDERR: 
	I0923 03:42:45.019619    9850 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/disk.qcow2
	I0923 03:42:45.019623    9850 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:42:45.019636    9850 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:42:45.019663    9850 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:ce:a6:89:bb:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/disk.qcow2
	I0923 03:42:45.021407    9850 main.go:141] libmachine: STDOUT: 
	I0923 03:42:45.021424    9850 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:42:45.021446    9850 client.go:171] duration metric: took 254.468333ms to LocalClient.Create
	I0923 03:42:47.023517    9850 start.go:128] duration metric: took 2.27786525s to createHost
	I0923 03:42:47.023554    9850 start.go:83] releasing machines lock for "kindnet-165000", held for 2.277945916s
	W0923 03:42:47.023579    9850 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:42:47.044003    9850 out.go:177] * Deleting "kindnet-165000" in qemu2 ...
	W0923 03:42:47.067369    9850 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:42:47.067382    9850 start.go:729] Will try again in 5 seconds ...
	I0923 03:42:52.069467    9850 start.go:360] acquireMachinesLock for kindnet-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:42:52.069621    9850 start.go:364] duration metric: took 104.875µs to acquireMachinesLock for "kindnet-165000"
	I0923 03:42:52.069640    9850 start.go:93] Provisioning new machine with config: &{Name:kindnet-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:42:52.069703    9850 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:42:52.079049    9850 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:42:52.098504    9850 start.go:159] libmachine.API.Create for "kindnet-165000" (driver="qemu2")
	I0923 03:42:52.098547    9850 client.go:168] LocalClient.Create starting
	I0923 03:42:52.098608    9850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:42:52.098646    9850 main.go:141] libmachine: Decoding PEM data...
	I0923 03:42:52.098659    9850 main.go:141] libmachine: Parsing certificate...
	I0923 03:42:52.098699    9850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:42:52.098726    9850 main.go:141] libmachine: Decoding PEM data...
	I0923 03:42:52.098735    9850 main.go:141] libmachine: Parsing certificate...
	I0923 03:42:52.098998    9850 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:42:52.264122    9850 main.go:141] libmachine: Creating SSH key...
	I0923 03:42:52.514640    9850 main.go:141] libmachine: Creating Disk image...
	I0923 03:42:52.514649    9850 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:42:52.514873    9850 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/disk.qcow2
	I0923 03:42:52.524735    9850 main.go:141] libmachine: STDOUT: 
	I0923 03:42:52.524758    9850 main.go:141] libmachine: STDERR: 
	I0923 03:42:52.524821    9850 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/disk.qcow2 +20000M
	I0923 03:42:52.532767    9850 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:42:52.532783    9850 main.go:141] libmachine: STDERR: 
	I0923 03:42:52.532803    9850 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/disk.qcow2
	I0923 03:42:52.532809    9850 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:42:52.532815    9850 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:42:52.532854    9850 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:7a:0e:bd:45:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kindnet-165000/disk.qcow2
	I0923 03:42:52.534574    9850 main.go:141] libmachine: STDOUT: 
	I0923 03:42:52.534592    9850 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:42:52.534604    9850 client.go:171] duration metric: took 436.063417ms to LocalClient.Create
	I0923 03:42:54.536902    9850 start.go:128] duration metric: took 2.467187834s to createHost
	I0923 03:42:54.536994    9850 start.go:83] releasing machines lock for "kindnet-165000", held for 2.467414959s
	W0923 03:42:54.537337    9850 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:42:54.547022    9850 out.go:201] 
	W0923 03:42:54.554151    9850 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:42:54.554171    9850 out.go:270] * 
	* 
	W0923 03:42:54.556427    9850 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:42:54.565093    9850 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.96s)
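
The failing step can be reproduced in isolation, without QEMU: socket_vmnet_client connects to the given unix socket and execs the rest of its argv with the connection passed down as fd 3 (which is why the QEMU command lines above use -netdev socket,id=net0,fd=3). A sketch, using "true" as an arbitrary stand-in command; with the daemon down it should surface the same error seen in these logs:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# expected while the daemon is down:
	# Failed to connect to "/var/run/socket_vmnet": Connection refused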

TestNetworkPlugins/group/calico/Start (10.05s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.053082542s)

-- stdout --
	* [calico-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-165000" primary control-plane node in "calico-165000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-165000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:42:56.896420    9973 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:42:56.896545    9973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:42:56.896548    9973 out.go:358] Setting ErrFile to fd 2...
	I0923 03:42:56.896551    9973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:42:56.896683    9973 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:42:56.897754    9973 out.go:352] Setting JSON to false
	I0923 03:42:56.914308    9973 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6147,"bootTime":1727082029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:42:56.914374    9973 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:42:56.921602    9973 out.go:177] * [calico-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:42:56.929325    9973 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:42:56.929363    9973 notify.go:220] Checking for updates...
	I0923 03:42:56.937368    9973 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:42:56.940373    9973 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:42:56.944333    9973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:42:56.947313    9973 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:42:56.950311    9973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:42:56.953716    9973 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:42:56.953781    9973 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:42:56.953832    9973 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:42:56.956289    9973 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:42:56.963351    9973 start.go:297] selected driver: qemu2
	I0923 03:42:56.963356    9973 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:42:56.963361    9973 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:42:56.965666    9973 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:42:56.966997    9973 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:42:56.971376    9973 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:42:56.971393    9973 cni.go:84] Creating CNI manager for "calico"
	I0923 03:42:56.971396    9973 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0923 03:42:56.971424    9973 start.go:340] cluster config:
	{Name:calico-165000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:42:56.974863    9973 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:42:56.979376    9973 out.go:177] * Starting "calico-165000" primary control-plane node in "calico-165000" cluster
	I0923 03:42:56.987334    9973 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:42:56.987349    9973 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:42:56.987358    9973 cache.go:56] Caching tarball of preloaded images
	I0923 03:42:56.987413    9973 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:42:56.987418    9973 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:42:56.987463    9973 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/calico-165000/config.json ...
	I0923 03:42:56.987473    9973 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/calico-165000/config.json: {Name:mk6b09e368362000b3ac4ae92683c812f3a711e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:42:56.987676    9973 start.go:360] acquireMachinesLock for calico-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:42:56.987707    9973 start.go:364] duration metric: took 25.083µs to acquireMachinesLock for "calico-165000"
	I0923 03:42:56.987718    9973 start.go:93] Provisioning new machine with config: &{Name:calico-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:42:56.987747    9973 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:42:56.995401    9973 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:42:57.010571    9973 start.go:159] libmachine.API.Create for "calico-165000" (driver="qemu2")
	I0923 03:42:57.010601    9973 client.go:168] LocalClient.Create starting
	I0923 03:42:57.010663    9973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:42:57.010695    9973 main.go:141] libmachine: Decoding PEM data...
	I0923 03:42:57.010706    9973 main.go:141] libmachine: Parsing certificate...
	I0923 03:42:57.010746    9973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:42:57.010769    9973 main.go:141] libmachine: Decoding PEM data...
	I0923 03:42:57.010777    9973 main.go:141] libmachine: Parsing certificate...
	I0923 03:42:57.011119    9973 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:42:57.172502    9973 main.go:141] libmachine: Creating SSH key...
	I0923 03:42:57.419909    9973 main.go:141] libmachine: Creating Disk image...
	I0923 03:42:57.419919    9973 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:42:57.420135    9973 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/disk.qcow2
	I0923 03:42:57.430086    9973 main.go:141] libmachine: STDOUT: 
	I0923 03:42:57.430101    9973 main.go:141] libmachine: STDERR: 
	I0923 03:42:57.430178    9973 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/disk.qcow2 +20000M
	I0923 03:42:57.438483    9973 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:42:57.438500    9973 main.go:141] libmachine: STDERR: 
	I0923 03:42:57.438518    9973 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/disk.qcow2
	I0923 03:42:57.438527    9973 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:42:57.438541    9973 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:42:57.438565    9973 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:0f:e4:96:3b:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/disk.qcow2
	I0923 03:42:57.440470    9973 main.go:141] libmachine: STDOUT: 
	I0923 03:42:57.440492    9973 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:42:57.440514    9973 client.go:171] duration metric: took 429.91675ms to LocalClient.Create
	I0923 03:42:59.442755    9973 start.go:128] duration metric: took 2.455035875s to createHost
	I0923 03:42:59.442836    9973 start.go:83] releasing machines lock for "calico-165000", held for 2.455175959s
	W0923 03:42:59.442947    9973 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:42:59.449847    9973 out.go:177] * Deleting "calico-165000" in qemu2 ...
	W0923 03:42:59.479222    9973 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:42:59.479248    9973 start.go:729] Will try again in 5 seconds ...
	I0923 03:43:04.481388    9973 start.go:360] acquireMachinesLock for calico-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:43:04.481859    9973 start.go:364] duration metric: took 340.042µs to acquireMachinesLock for "calico-165000"
	I0923 03:43:04.481957    9973 start.go:93] Provisioning new machine with config: &{Name:calico-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:43:04.482237    9973 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:43:04.487819    9973 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:43:04.529326    9973 start.go:159] libmachine.API.Create for "calico-165000" (driver="qemu2")
	I0923 03:43:04.529383    9973 client.go:168] LocalClient.Create starting
	I0923 03:43:04.529510    9973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:43:04.529595    9973 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:04.529612    9973 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:04.529675    9973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:43:04.529721    9973 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:04.529732    9973 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:04.530267    9973 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:43:04.704772    9973 main.go:141] libmachine: Creating SSH key...
	I0923 03:43:04.851982    9973 main.go:141] libmachine: Creating Disk image...
	I0923 03:43:04.851994    9973 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:43:04.852224    9973 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/disk.qcow2
	I0923 03:43:04.861789    9973 main.go:141] libmachine: STDOUT: 
	I0923 03:43:04.861806    9973 main.go:141] libmachine: STDERR: 
	I0923 03:43:04.861866    9973 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/disk.qcow2 +20000M
	I0923 03:43:04.869865    9973 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:43:04.869879    9973 main.go:141] libmachine: STDERR: 
	I0923 03:43:04.869892    9973 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/disk.qcow2
	I0923 03:43:04.869897    9973 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:43:04.869905    9973 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:43:04.869948    9973 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:1d:4f:96:b4:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/calico-165000/disk.qcow2
	I0923 03:43:04.871661    9973 main.go:141] libmachine: STDOUT: 
	I0923 03:43:04.871675    9973 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:43:04.871686    9973 client.go:171] duration metric: took 342.302292ms to LocalClient.Create
	I0923 03:43:06.873828    9973 start.go:128] duration metric: took 2.391584709s to createHost
	I0923 03:43:06.873929    9973 start.go:83] releasing machines lock for "calico-165000", held for 2.392099834s
	W0923 03:43:06.874315    9973 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:43:06.888349    9973 out.go:201] 
	W0923 03:43:06.892515    9973 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:43:06.892554    9973 out.go:270] * 
	* 
	W0923 03:43:06.894616    9973 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:43:06.907427    9973 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.05s)
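
All of the failures in this group stall at the same step: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so QEMU is never launched and minikube exits with GUEST_PROVISION after a single 5-second retry. A minimal diagnostic sketch for the build host, assuming only the paths reported in the log above (the trailing `true` is a hypothetical placeholder guest command, not something minikube runs):

    # Is the unix socket present and backed by a live daemon?
    ls -l /var/run/socket_vmnet      # mode should start with "s" (unix socket)
    pgrep -fl socket_vmnet           # should list the socket_vmnet daemon
    # Probe the socket the same way the qemu2 driver does:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the daemon is down, every qemu2 profile in this report fails identically, independent of the CNI under test.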

TestNetworkPlugins/group/custom-flannel/Start (9.81s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.805939417s)

-- stdout --
	* [custom-flannel-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-165000" primary control-plane node in "custom-flannel-165000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-165000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:43:09.352404   10100 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:43:09.352548   10100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:43:09.352552   10100 out.go:358] Setting ErrFile to fd 2...
	I0923 03:43:09.352554   10100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:43:09.352699   10100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:43:09.353816   10100 out.go:352] Setting JSON to false
	I0923 03:43:09.370276   10100 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6160,"bootTime":1727082029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:43:09.370342   10100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:43:09.376904   10100 out.go:177] * [custom-flannel-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:43:09.385147   10100 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:43:09.385209   10100 notify.go:220] Checking for updates...
	I0923 03:43:09.393021   10100 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:43:09.396075   10100 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:43:09.400017   10100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:43:09.403074   10100 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:43:09.406075   10100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:43:09.409284   10100 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:43:09.409355   10100 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:43:09.409412   10100 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:43:09.413037   10100 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:43:09.419072   10100 start.go:297] selected driver: qemu2
	I0923 03:43:09.419082   10100 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:43:09.419091   10100 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:43:09.421566   10100 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:43:09.424999   10100 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:43:09.428112   10100 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:43:09.428129   10100 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0923 03:43:09.428136   10100 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0923 03:43:09.428164   10100 start.go:340] cluster config:
	{Name:custom-flannel-165000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:43:09.431857   10100 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:43:09.439069   10100 out.go:177] * Starting "custom-flannel-165000" primary control-plane node in "custom-flannel-165000" cluster
	I0923 03:43:09.443080   10100 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:43:09.443108   10100 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:43:09.443118   10100 cache.go:56] Caching tarball of preloaded images
	I0923 03:43:09.443186   10100 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:43:09.443191   10100 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:43:09.443250   10100 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/custom-flannel-165000/config.json ...
	I0923 03:43:09.443260   10100 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/custom-flannel-165000/config.json: {Name:mk74274c6df5301ad691d7632bba356d4edd90c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:43:09.443466   10100 start.go:360] acquireMachinesLock for custom-flannel-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:43:09.443504   10100 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "custom-flannel-165000"
	I0923 03:43:09.443516   10100 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:43:09.443542   10100 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:43:09.447148   10100 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:43:09.462859   10100 start.go:159] libmachine.API.Create for "custom-flannel-165000" (driver="qemu2")
	I0923 03:43:09.462893   10100 client.go:168] LocalClient.Create starting
	I0923 03:43:09.462956   10100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:43:09.462989   10100 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:09.463000   10100 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:09.463035   10100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:43:09.463058   10100 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:09.463064   10100 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:09.463436   10100 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:43:09.625915   10100 main.go:141] libmachine: Creating SSH key...
	I0923 03:43:09.729981   10100 main.go:141] libmachine: Creating Disk image...
	I0923 03:43:09.729989   10100 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:43:09.730171   10100 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/disk.qcow2
	I0923 03:43:09.739660   10100 main.go:141] libmachine: STDOUT: 
	I0923 03:43:09.739678   10100 main.go:141] libmachine: STDERR: 
	I0923 03:43:09.739737   10100 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/disk.qcow2 +20000M
	I0923 03:43:09.747658   10100 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:43:09.747671   10100 main.go:141] libmachine: STDERR: 
	I0923 03:43:09.747695   10100 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/disk.qcow2
	I0923 03:43:09.747702   10100 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:43:09.747715   10100 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:43:09.747740   10100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:85:8e:68:f6:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/disk.qcow2
	I0923 03:43:09.749436   10100 main.go:141] libmachine: STDOUT: 
	I0923 03:43:09.749453   10100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:43:09.749469   10100 client.go:171] duration metric: took 286.577792ms to LocalClient.Create
	I0923 03:43:11.751633   10100 start.go:128] duration metric: took 2.308110083s to createHost
	I0923 03:43:11.751702   10100 start.go:83] releasing machines lock for "custom-flannel-165000", held for 2.308239s
	W0923 03:43:11.751795   10100 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:43:11.771331   10100 out.go:177] * Deleting "custom-flannel-165000" in qemu2 ...
	W0923 03:43:11.801264   10100 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:43:11.801287   10100 start.go:729] Will try again in 5 seconds ...
	I0923 03:43:16.803351   10100 start.go:360] acquireMachinesLock for custom-flannel-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:43:16.803554   10100 start.go:364] duration metric: took 147.458µs to acquireMachinesLock for "custom-flannel-165000"
	I0923 03:43:16.803582   10100 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:43:16.803702   10100 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:43:16.815992   10100 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:43:16.844499   10100 start.go:159] libmachine.API.Create for "custom-flannel-165000" (driver="qemu2")
	I0923 03:43:16.844541   10100 client.go:168] LocalClient.Create starting
	I0923 03:43:16.844651   10100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:43:16.844708   10100 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:16.844721   10100 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:16.844777   10100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:43:16.844808   10100 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:16.844818   10100 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:16.845389   10100 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:43:17.009806   10100 main.go:141] libmachine: Creating SSH key...
	I0923 03:43:17.071150   10100 main.go:141] libmachine: Creating Disk image...
	I0923 03:43:17.071157   10100 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:43:17.071357   10100 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/disk.qcow2
	I0923 03:43:17.080941   10100 main.go:141] libmachine: STDOUT: 
	I0923 03:43:17.080959   10100 main.go:141] libmachine: STDERR: 
	I0923 03:43:17.081016   10100 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/disk.qcow2 +20000M
	I0923 03:43:17.089583   10100 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:43:17.089610   10100 main.go:141] libmachine: STDERR: 
	I0923 03:43:17.089625   10100 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/disk.qcow2
	I0923 03:43:17.089640   10100 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:43:17.089646   10100 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:43:17.089677   10100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:39:b4:44:c2:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/custom-flannel-165000/disk.qcow2
	I0923 03:43:17.091650   10100 main.go:141] libmachine: STDOUT: 
	I0923 03:43:17.091665   10100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:43:17.091678   10100 client.go:171] duration metric: took 247.136458ms to LocalClient.Create
	I0923 03:43:19.093735   10100 start.go:128] duration metric: took 2.29007025s to createHost
	I0923 03:43:19.093763   10100 start.go:83] releasing machines lock for "custom-flannel-165000", held for 2.290248708s
	W0923 03:43:19.093949   10100 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:43:19.103370   10100 out.go:201] 
	W0923 03:43:19.109281   10100 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:43:19.109329   10100 out.go:270] * 
	* 
	W0923 03:43:19.110489   10100 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:43:19.121389   10100 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.81s)
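
The custom-flannel variant fails before its CNI manifest (testdata/kube-flannel.yaml) is ever applied, which confirms the refusal lies in host networking, not in the plugin configuration. With the source-install layout shown in the log, running the daemon in the foreground isolates it from minikube; a hedged sketch, assuming socket_vmnet's documented manual invocation (the gateway address is the upstream default, not a value taken from this report):

    # Requires root: vmnet needs elevated privileges to create the interface.
    sudo /opt/socket_vmnet/bin/socket_vmnet \
        --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the daemon holds the socket, rerunning the start command above should get past "Creating qemu2 VM ...".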

TestNetworkPlugins/group/false/Start (9.84s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.835488541s)

-- stdout --
	* [false-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-165000" primary control-plane node in "false-165000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-165000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:43:21.521934   10226 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:43:21.522095   10226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:43:21.522098   10226 out.go:358] Setting ErrFile to fd 2...
	I0923 03:43:21.522101   10226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:43:21.522240   10226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:43:21.523296   10226 out.go:352] Setting JSON to false
	I0923 03:43:21.539515   10226 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6172,"bootTime":1727082029,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:43:21.539599   10226 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:43:21.545783   10226 out.go:177] * [false-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:43:21.553637   10226 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:43:21.553709   10226 notify.go:220] Checking for updates...
	I0923 03:43:21.561501   10226 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:43:21.564582   10226 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:43:21.567596   10226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:43:21.568969   10226 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:43:21.571560   10226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:43:21.574993   10226 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:43:21.575059   10226 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:43:21.575110   10226 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:43:21.579393   10226 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:43:21.586545   10226 start.go:297] selected driver: qemu2
	I0923 03:43:21.586551   10226 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:43:21.586557   10226 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:43:21.588801   10226 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:43:21.592383   10226 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:43:21.595665   10226 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:43:21.595681   10226 cni.go:84] Creating CNI manager for "false"
	I0923 03:43:21.595717   10226 start.go:340] cluster config:
	{Name:false-165000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:43:21.599216   10226 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:43:21.606545   10226 out.go:177] * Starting "false-165000" primary control-plane node in "false-165000" cluster
	I0923 03:43:21.610524   10226 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:43:21.610538   10226 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:43:21.610545   10226 cache.go:56] Caching tarball of preloaded images
	I0923 03:43:21.610594   10226 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:43:21.610599   10226 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:43:21.610644   10226 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/false-165000/config.json ...
	I0923 03:43:21.610654   10226 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/false-165000/config.json: {Name:mk65c96cf5715182ec088564bc90fa76c36dec2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:43:21.610852   10226 start.go:360] acquireMachinesLock for false-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:43:21.610885   10226 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "false-165000"
	I0923 03:43:21.610896   10226 start.go:93] Provisioning new machine with config: &{Name:false-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:43:21.610928   10226 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:43:21.618567   10226 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:43:21.634334   10226 start.go:159] libmachine.API.Create for "false-165000" (driver="qemu2")
	I0923 03:43:21.634366   10226 client.go:168] LocalClient.Create starting
	I0923 03:43:21.634434   10226 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:43:21.634464   10226 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:21.634474   10226 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:21.634514   10226 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:43:21.634539   10226 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:21.634545   10226 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:21.634913   10226 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:43:21.798969   10226 main.go:141] libmachine: Creating SSH key...
	I0923 03:43:21.889055   10226 main.go:141] libmachine: Creating Disk image...
	I0923 03:43:21.889065   10226 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:43:21.889291   10226 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/disk.qcow2
	I0923 03:43:21.899804   10226 main.go:141] libmachine: STDOUT: 
	I0923 03:43:21.899829   10226 main.go:141] libmachine: STDERR: 
	I0923 03:43:21.899909   10226 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/disk.qcow2 +20000M
	I0923 03:43:21.909547   10226 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:43:21.909575   10226 main.go:141] libmachine: STDERR: 
	I0923 03:43:21.909591   10226 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/disk.qcow2
	I0923 03:43:21.909598   10226 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:43:21.909608   10226 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:43:21.909639   10226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c8:7b:8c:b9:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/disk.qcow2
	I0923 03:43:21.911616   10226 main.go:141] libmachine: STDOUT: 
	I0923 03:43:21.911633   10226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:43:21.911655   10226 client.go:171] duration metric: took 277.28875ms to LocalClient.Create
	I0923 03:43:23.913708   10226 start.go:128] duration metric: took 2.302818541s to createHost
	I0923 03:43:23.913774   10226 start.go:83] releasing machines lock for "false-165000", held for 2.302932791s
	W0923 03:43:23.913814   10226 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:43:23.918955   10226 out.go:177] * Deleting "false-165000" in qemu2 ...
	W0923 03:43:23.936541   10226 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:43:23.936550   10226 start.go:729] Will try again in 5 seconds ...
	I0923 03:43:28.938768   10226 start.go:360] acquireMachinesLock for false-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:43:28.939311   10226 start.go:364] duration metric: took 435.041µs to acquireMachinesLock for "false-165000"
	I0923 03:43:28.939462   10226 start.go:93] Provisioning new machine with config: &{Name:false-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:43:28.939724   10226 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:43:28.951141   10226 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:43:29.000192   10226 start.go:159] libmachine.API.Create for "false-165000" (driver="qemu2")
	I0923 03:43:29.000247   10226 client.go:168] LocalClient.Create starting
	I0923 03:43:29.000378   10226 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:43:29.000452   10226 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:29.000468   10226 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:29.000530   10226 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:43:29.000575   10226 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:29.000608   10226 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:29.001176   10226 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:43:29.174003   10226 main.go:141] libmachine: Creating SSH key...
	I0923 03:43:29.261103   10226 main.go:141] libmachine: Creating Disk image...
	I0923 03:43:29.261110   10226 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:43:29.261310   10226 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/disk.qcow2
	I0923 03:43:29.270743   10226 main.go:141] libmachine: STDOUT: 
	I0923 03:43:29.270768   10226 main.go:141] libmachine: STDERR: 
	I0923 03:43:29.270818   10226 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/disk.qcow2 +20000M
	I0923 03:43:29.278863   10226 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:43:29.278883   10226 main.go:141] libmachine: STDERR: 
	I0923 03:43:29.278893   10226 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/disk.qcow2
	I0923 03:43:29.278907   10226 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:43:29.278917   10226 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:43:29.278942   10226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:be:16:e8:dc:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/false-165000/disk.qcow2
	I0923 03:43:29.280689   10226 main.go:141] libmachine: STDOUT: 
	I0923 03:43:29.280704   10226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:43:29.280716   10226 client.go:171] duration metric: took 280.467833ms to LocalClient.Create
	I0923 03:43:31.282928   10226 start.go:128] duration metric: took 2.343222208s to createHost
	I0923 03:43:31.282989   10226 start.go:83] releasing machines lock for "false-165000", held for 2.3437045s
	W0923 03:43:31.283261   10226 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:43:31.296409   10226 out.go:201] 
	W0923 03:43:31.300510   10226 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:43:31.300537   10226 out.go:270] * 
	* 
	W0923 03:43:31.302498   10226 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:43:31.315421   10226 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.84s)
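The failures in this group share one root cause visible in the stderr above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the socket_vmnet daemon behind /var/run/socket_vmnet is refusing connections, so every create aborts with exit status 80 before the guest ever boots. A minimal triage sketch on the build agent, assuming socket_vmnet was installed from source under /opt/socket_vmnet as the log paths suggest; the launchd query and the Homebrew alternative are assumptions, not taken from this log:

	# Does the socket file exist, and is a daemon holding it?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i vmnet

	# For a Homebrew-managed install, restarting the service would look like:
	sudo brew services restart socket_vmnet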

TestNetworkPlugins/group/enable-default-cni/Start (9.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.849326958s)

-- stdout --
	* [enable-default-cni-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-165000" primary control-plane node in "enable-default-cni-165000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-165000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:43:33.492400   10347 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:43:33.492525   10347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:43:33.492529   10347 out.go:358] Setting ErrFile to fd 2...
	I0923 03:43:33.492531   10347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:43:33.492666   10347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:43:33.493836   10347 out.go:352] Setting JSON to false
	I0923 03:43:33.510026   10347 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6184,"bootTime":1727082029,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:43:33.510099   10347 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:43:33.515556   10347 out.go:177] * [enable-default-cni-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:43:33.521075   10347 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:43:33.521153   10347 notify.go:220] Checking for updates...
	I0923 03:43:33.527367   10347 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:43:33.528804   10347 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:43:33.532393   10347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:43:33.535428   10347 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:43:33.538397   10347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:43:33.541707   10347 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:43:33.541771   10347 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:43:33.541814   10347 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:43:33.545367   10347 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:43:33.552367   10347 start.go:297] selected driver: qemu2
	I0923 03:43:33.552373   10347 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:43:33.552379   10347 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:43:33.554534   10347 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:43:33.558357   10347 out.go:177] * Automatically selected the socket_vmnet network
	E0923 03:43:33.561472   10347 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0923 03:43:33.561487   10347 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:43:33.561508   10347 cni.go:84] Creating CNI manager for "bridge"
	I0923 03:43:33.561520   10347 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:43:33.561553   10347 start.go:340] cluster config:
	{Name:enable-default-cni-165000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:43:33.565087   10347 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:43:33.572416   10347 out.go:177] * Starting "enable-default-cni-165000" primary control-plane node in "enable-default-cni-165000" cluster
	I0923 03:43:33.576362   10347 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:43:33.576382   10347 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:43:33.576388   10347 cache.go:56] Caching tarball of preloaded images
	I0923 03:43:33.576460   10347 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:43:33.576465   10347 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:43:33.576547   10347 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/enable-default-cni-165000/config.json ...
	I0923 03:43:33.576568   10347 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/enable-default-cni-165000/config.json: {Name:mkfbd0bb84b2bdea2f915910bad78ead5c245795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:43:33.576880   10347 start.go:360] acquireMachinesLock for enable-default-cni-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:43:33.576929   10347 start.go:364] duration metric: took 36.5µs to acquireMachinesLock for "enable-default-cni-165000"
	I0923 03:43:33.576944   10347 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:43:33.576974   10347 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:43:33.583306   10347 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:43:33.600047   10347 start.go:159] libmachine.API.Create for "enable-default-cni-165000" (driver="qemu2")
	I0923 03:43:33.600075   10347 client.go:168] LocalClient.Create starting
	I0923 03:43:33.600139   10347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:43:33.600169   10347 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:33.600177   10347 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:33.600214   10347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:43:33.600236   10347 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:33.600246   10347 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:33.600690   10347 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:43:33.764097   10347 main.go:141] libmachine: Creating SSH key...
	I0923 03:43:33.898936   10347 main.go:141] libmachine: Creating Disk image...
	I0923 03:43:33.898947   10347 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:43:33.899150   10347 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/disk.qcow2
	I0923 03:43:33.908429   10347 main.go:141] libmachine: STDOUT: 
	I0923 03:43:33.908451   10347 main.go:141] libmachine: STDERR: 
	I0923 03:43:33.908523   10347 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/disk.qcow2 +20000M
	I0923 03:43:33.916396   10347 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:43:33.916413   10347 main.go:141] libmachine: STDERR: 
	I0923 03:43:33.916438   10347 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/disk.qcow2
	I0923 03:43:33.916445   10347 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:43:33.916457   10347 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:43:33.916484   10347 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:c0:26:9c:6f:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/disk.qcow2
	I0923 03:43:33.918176   10347 main.go:141] libmachine: STDOUT: 
	I0923 03:43:33.918190   10347 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:43:33.918207   10347 client.go:171] duration metric: took 318.134459ms to LocalClient.Create
	I0923 03:43:35.920378   10347 start.go:128] duration metric: took 2.343417958s to createHost
	I0923 03:43:35.920483   10347 start.go:83] releasing machines lock for "enable-default-cni-165000", held for 2.343594166s
	W0923 03:43:35.920560   10347 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:43:35.939726   10347 out.go:177] * Deleting "enable-default-cni-165000" in qemu2 ...
	W0923 03:43:35.971674   10347 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:43:35.971700   10347 start.go:729] Will try again in 5 seconds ...
	I0923 03:43:40.973723   10347 start.go:360] acquireMachinesLock for enable-default-cni-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:43:40.974098   10347 start.go:364] duration metric: took 316.709µs to acquireMachinesLock for "enable-default-cni-165000"
	I0923 03:43:40.974145   10347 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:43:40.974345   10347 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:43:40.983584   10347 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:43:41.019914   10347 start.go:159] libmachine.API.Create for "enable-default-cni-165000" (driver="qemu2")
	I0923 03:43:41.019973   10347 client.go:168] LocalClient.Create starting
	I0923 03:43:41.020074   10347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:43:41.020141   10347 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:41.020158   10347 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:41.020222   10347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:43:41.020262   10347 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:41.020274   10347 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:41.020837   10347 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:43:41.189520   10347 main.go:141] libmachine: Creating SSH key...
	I0923 03:43:41.247542   10347 main.go:141] libmachine: Creating Disk image...
	I0923 03:43:41.247549   10347 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:43:41.247738   10347 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/disk.qcow2
	I0923 03:43:41.256840   10347 main.go:141] libmachine: STDOUT: 
	I0923 03:43:41.256858   10347 main.go:141] libmachine: STDERR: 
	I0923 03:43:41.256915   10347 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/disk.qcow2 +20000M
	I0923 03:43:41.265157   10347 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:43:41.265173   10347 main.go:141] libmachine: STDERR: 
	I0923 03:43:41.265190   10347 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/disk.qcow2
	I0923 03:43:41.265197   10347 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:43:41.265207   10347 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:43:41.265233   10347 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:8c:94:f2:73:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/enable-default-cni-165000/disk.qcow2
	I0923 03:43:41.267002   10347 main.go:141] libmachine: STDOUT: 
	I0923 03:43:41.267015   10347 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:43:41.267031   10347 client.go:171] duration metric: took 247.05775ms to LocalClient.Create
	I0923 03:43:43.269205   10347 start.go:128] duration metric: took 2.294876916s to createHost
	I0923 03:43:43.269323   10347 start.go:83] releasing machines lock for "enable-default-cni-165000", held for 2.295254125s
	W0923 03:43:43.269753   10347 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:43:43.278231   10347 out.go:201] 
	W0923 03:43:43.286491   10347 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:43:43.286526   10347 out.go:270] * 
	* 
	W0923 03:43:43.289518   10347 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:43:43.298368   10347 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.85s)
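Beyond the shared socket_vmnet failure, the E0923 line in this run's stderr is worth noting: --enable-default-cni is deprecated, and minikube rewrites it to --cni=bridge before provisioning. Once the daemon is reachable again, a non-deprecated equivalent of this test's invocation (same flags, only the CNI option swapped, shown here as a sketch) would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-165000 --memory=3072 \
	    --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2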

TestNetworkPlugins/group/flannel/Start (9.88s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.875659875s)

-- stdout --
	* [flannel-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-165000" primary control-plane node in "flannel-165000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-165000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:43:45.469783   10462 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:43:45.469916   10462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:43:45.469922   10462 out.go:358] Setting ErrFile to fd 2...
	I0923 03:43:45.469924   10462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:43:45.470086   10462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:43:45.471415   10462 out.go:352] Setting JSON to false
	I0923 03:43:45.489507   10462 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6196,"bootTime":1727082029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:43:45.489593   10462 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:43:45.494165   10462 out.go:177] * [flannel-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:43:45.502171   10462 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:43:45.502287   10462 notify.go:220] Checking for updates...
	I0923 03:43:45.510079   10462 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:43:45.513101   10462 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:43:45.515998   10462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:43:45.519027   10462 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:43:45.522098   10462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:43:45.523591   10462 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:43:45.523655   10462 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:43:45.523710   10462 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:43:45.528036   10462 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:43:45.534906   10462 start.go:297] selected driver: qemu2
	I0923 03:43:45.534910   10462 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:43:45.534919   10462 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:43:45.537144   10462 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:43:45.540024   10462 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:43:45.543154   10462 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:43:45.543176   10462 cni.go:84] Creating CNI manager for "flannel"
	I0923 03:43:45.543180   10462 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0923 03:43:45.543232   10462 start.go:340] cluster config:
	{Name:flannel-165000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:43:45.547448   10462 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:43:45.556042   10462 out.go:177] * Starting "flannel-165000" primary control-plane node in "flannel-165000" cluster
	I0923 03:43:45.560087   10462 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:43:45.560118   10462 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:43:45.560130   10462 cache.go:56] Caching tarball of preloaded images
	I0923 03:43:45.560223   10462 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:43:45.560230   10462 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:43:45.560304   10462 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/flannel-165000/config.json ...
	I0923 03:43:45.560316   10462 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/flannel-165000/config.json: {Name:mkd251ab8cb37dbc5d62ddb72fb6a06f37302fc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:43:45.560614   10462 start.go:360] acquireMachinesLock for flannel-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:43:45.560646   10462 start.go:364] duration metric: took 26.167µs to acquireMachinesLock for "flannel-165000"
	I0923 03:43:45.560658   10462 start.go:93] Provisioning new machine with config: &{Name:flannel-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:43:45.560683   10462 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:43:45.564090   10462 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:43:45.580299   10462 start.go:159] libmachine.API.Create for "flannel-165000" (driver="qemu2")
	I0923 03:43:45.580339   10462 client.go:168] LocalClient.Create starting
	I0923 03:43:45.580411   10462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:43:45.580450   10462 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:45.580461   10462 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:45.580501   10462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:43:45.580524   10462 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:45.580532   10462 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:45.580893   10462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:43:45.742686   10462 main.go:141] libmachine: Creating SSH key...
	I0923 03:43:45.823087   10462 main.go:141] libmachine: Creating Disk image...
	I0923 03:43:45.823094   10462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:43:45.823296   10462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/disk.qcow2
	I0923 03:43:45.833132   10462 main.go:141] libmachine: STDOUT: 
	I0923 03:43:45.833153   10462 main.go:141] libmachine: STDERR: 
	I0923 03:43:45.833219   10462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/disk.qcow2 +20000M
	I0923 03:43:45.841698   10462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:43:45.841714   10462 main.go:141] libmachine: STDERR: 
	I0923 03:43:45.841738   10462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/disk.qcow2
	I0923 03:43:45.841742   10462 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:43:45.841754   10462 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:43:45.841784   10462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:1e:a8:81:11:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/disk.qcow2
	I0923 03:43:45.843611   10462 main.go:141] libmachine: STDOUT: 
	I0923 03:43:45.843627   10462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:43:45.843658   10462 client.go:171] duration metric: took 263.318209ms to LocalClient.Create
	I0923 03:43:47.845934   10462 start.go:128] duration metric: took 2.285275292s to createHost
	I0923 03:43:47.846019   10462 start.go:83] releasing machines lock for "flannel-165000", held for 2.285414667s
	W0923 03:43:47.846094   10462 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:43:47.859407   10462 out.go:177] * Deleting "flannel-165000" in qemu2 ...
	W0923 03:43:47.891682   10462 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:43:47.891711   10462 start.go:729] Will try again in 5 seconds ...
	I0923 03:43:52.893964   10462 start.go:360] acquireMachinesLock for flannel-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:43:52.894548   10462 start.go:364] duration metric: took 474.875µs to acquireMachinesLock for "flannel-165000"
	I0923 03:43:52.894696   10462 start.go:93] Provisioning new machine with config: &{Name:flannel-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:43:52.894958   10462 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:43:52.905616   10462 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:43:52.955986   10462 start.go:159] libmachine.API.Create for "flannel-165000" (driver="qemu2")
	I0923 03:43:52.956033   10462 client.go:168] LocalClient.Create starting
	I0923 03:43:52.956157   10462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:43:52.956234   10462 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:52.956253   10462 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:52.956326   10462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:43:52.956372   10462 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:52.956391   10462 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:52.956927   10462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:43:53.135552   10462 main.go:141] libmachine: Creating SSH key...
	I0923 03:43:53.240813   10462 main.go:141] libmachine: Creating Disk image...
	I0923 03:43:53.240820   10462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:43:53.241038   10462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/disk.qcow2
	I0923 03:43:53.251074   10462 main.go:141] libmachine: STDOUT: 
	I0923 03:43:53.251095   10462 main.go:141] libmachine: STDERR: 
	I0923 03:43:53.251185   10462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/disk.qcow2 +20000M
	I0923 03:43:53.260570   10462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:43:53.260594   10462 main.go:141] libmachine: STDERR: 
	I0923 03:43:53.260618   10462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/disk.qcow2
	I0923 03:43:53.260623   10462 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:43:53.260635   10462 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:43:53.260663   10462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:ea:19:33:6c:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/flannel-165000/disk.qcow2
	I0923 03:43:53.262820   10462 main.go:141] libmachine: STDOUT: 
	I0923 03:43:53.262834   10462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:43:53.262850   10462 client.go:171] duration metric: took 306.816375ms to LocalClient.Create
	I0923 03:43:55.265016   10462 start.go:128] duration metric: took 2.370067208s to createHost
	I0923 03:43:55.265118   10462 start.go:83] releasing machines lock for "flannel-165000", held for 2.370596917s
	W0923 03:43:55.265429   10462 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:43:55.279386   10462 out.go:201] 
	W0923 03:43:55.285374   10462 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:43:55.285398   10462 out.go:270] * 
	* 
	W0923 03:43:55.288007   10462 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:43:55.300245   10462 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.88s)
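Every failure in this group reduces to the same condition visible in the logs above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never handed a network file descriptor and libmachine aborts the create. A minimal, standalone diagnostic sketch of that check (this program is illustrative, not part of the minikube test suite; the socket path is taken from the failing command lines):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the same unix socket that every failed "Creating qemu2 VM" step
	// above handed to qemu-system-aarch64 via socket_vmnet_client.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no socket_vmnet daemon listening, connect(2) fails with
		// ECONNREFUSED, matching the 'Connection refused' lines above.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

The ~9-10s recorded per test comes from minikube's own cycle rather than the dial: create (~2.3s to createHost), delete, a 5-second backoff, then one retry before exiting with status 80.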

TestNetworkPlugins/group/bridge/Start (9.76s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.757159041s)

-- stdout --
	* [bridge-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-165000" primary control-plane node in "bridge-165000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-165000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:43:57.681520   10584 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:43:57.681639   10584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:43:57.681642   10584 out.go:358] Setting ErrFile to fd 2...
	I0923 03:43:57.681645   10584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:43:57.681763   10584 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:43:57.682873   10584 out.go:352] Setting JSON to false
	I0923 03:43:57.699361   10584 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6208,"bootTime":1727082029,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:43:57.699431   10584 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:43:57.704007   10584 out.go:177] * [bridge-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:43:57.711801   10584 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:43:57.711872   10584 notify.go:220] Checking for updates...
	I0923 03:43:57.720798   10584 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:43:57.727768   10584 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:43:57.730832   10584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:43:57.734651   10584 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:43:57.737818   10584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:43:57.741184   10584 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:43:57.741245   10584 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:43:57.741309   10584 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:43:57.745631   10584 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:43:57.752737   10584 start.go:297] selected driver: qemu2
	I0923 03:43:57.752744   10584 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:43:57.752749   10584 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:43:57.755152   10584 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:43:57.757797   10584 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:43:57.760888   10584 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:43:57.760912   10584 cni.go:84] Creating CNI manager for "bridge"
	I0923 03:43:57.760916   10584 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:43:57.760954   10584 start.go:340] cluster config:
	{Name:bridge-165000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:43:57.764658   10584 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:43:57.772825   10584 out.go:177] * Starting "bridge-165000" primary control-plane node in "bridge-165000" cluster
	I0923 03:43:57.776807   10584 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:43:57.776823   10584 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:43:57.776833   10584 cache.go:56] Caching tarball of preloaded images
	I0923 03:43:57.776926   10584 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:43:57.776932   10584 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:43:57.777013   10584 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/bridge-165000/config.json ...
	I0923 03:43:57.777024   10584 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/bridge-165000/config.json: {Name:mk3cf86e5d073ca613155e8949b5293521a942c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:43:57.777243   10584 start.go:360] acquireMachinesLock for bridge-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:43:57.777278   10584 start.go:364] duration metric: took 29.208µs to acquireMachinesLock for "bridge-165000"
	I0923 03:43:57.777292   10584 start.go:93] Provisioning new machine with config: &{Name:bridge-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:43:57.777327   10584 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:43:57.784816   10584 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:43:57.802070   10584 start.go:159] libmachine.API.Create for "bridge-165000" (driver="qemu2")
	I0923 03:43:57.802107   10584 client.go:168] LocalClient.Create starting
	I0923 03:43:57.802182   10584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:43:57.802216   10584 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:57.802226   10584 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:57.802269   10584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:43:57.802292   10584 main.go:141] libmachine: Decoding PEM data...
	I0923 03:43:57.802301   10584 main.go:141] libmachine: Parsing certificate...
	I0923 03:43:57.802630   10584 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:43:57.965601   10584 main.go:141] libmachine: Creating SSH key...
	I0923 03:43:58.002060   10584 main.go:141] libmachine: Creating Disk image...
	I0923 03:43:58.002066   10584 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:43:58.002259   10584 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/disk.qcow2
	I0923 03:43:58.012357   10584 main.go:141] libmachine: STDOUT: 
	I0923 03:43:58.012376   10584 main.go:141] libmachine: STDERR: 
	I0923 03:43:58.012442   10584 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/disk.qcow2 +20000M
	I0923 03:43:58.020519   10584 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:43:58.020533   10584 main.go:141] libmachine: STDERR: 
	I0923 03:43:58.020548   10584 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/disk.qcow2
	I0923 03:43:58.020553   10584 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:43:58.020564   10584 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:43:58.020591   10584 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:43:ba:3d:f6:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/disk.qcow2
	I0923 03:43:58.022209   10584 main.go:141] libmachine: STDOUT: 
	I0923 03:43:58.022223   10584 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:43:58.022245   10584 client.go:171] duration metric: took 220.134292ms to LocalClient.Create
	I0923 03:44:00.024400   10584 start.go:128] duration metric: took 2.247091208s to createHost
	I0923 03:44:00.024476   10584 start.go:83] releasing machines lock for "bridge-165000", held for 2.247240208s
	W0923 03:44:00.024532   10584 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:00.029885   10584 out.go:177] * Deleting "bridge-165000" in qemu2 ...
	W0923 03:44:00.053992   10584 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:00.054005   10584 start.go:729] Will try again in 5 seconds ...
	I0923 03:44:05.056149   10584 start.go:360] acquireMachinesLock for bridge-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:44:05.056655   10584 start.go:364] duration metric: took 367.583µs to acquireMachinesLock for "bridge-165000"
	I0923 03:44:05.056784   10584 start.go:93] Provisioning new machine with config: &{Name:bridge-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:44:05.057085   10584 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:44:05.061788   10584 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:44:05.111730   10584 start.go:159] libmachine.API.Create for "bridge-165000" (driver="qemu2")
	I0923 03:44:05.111797   10584 client.go:168] LocalClient.Create starting
	I0923 03:44:05.111908   10584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:44:05.111990   10584 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:05.112012   10584 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:05.112078   10584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:44:05.112124   10584 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:05.112137   10584 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:05.112777   10584 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:44:05.284992   10584 main.go:141] libmachine: Creating SSH key...
	I0923 03:44:05.348058   10584 main.go:141] libmachine: Creating Disk image...
	I0923 03:44:05.348069   10584 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:44:05.348255   10584 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/disk.qcow2
	I0923 03:44:05.357474   10584 main.go:141] libmachine: STDOUT: 
	I0923 03:44:05.357504   10584 main.go:141] libmachine: STDERR: 
	I0923 03:44:05.357563   10584 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/disk.qcow2 +20000M
	I0923 03:44:05.365637   10584 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:44:05.365655   10584 main.go:141] libmachine: STDERR: 
	I0923 03:44:05.365669   10584 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/disk.qcow2
	I0923 03:44:05.365674   10584 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:44:05.365682   10584 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:44:05.365715   10584 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:ed:e6:28:1d:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/bridge-165000/disk.qcow2
	I0923 03:44:05.367345   10584 main.go:141] libmachine: STDOUT: 
	I0923 03:44:05.367359   10584 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:44:05.367373   10584 client.go:171] duration metric: took 255.576583ms to LocalClient.Create
	I0923 03:44:07.369495   10584 start.go:128] duration metric: took 2.312425834s to createHost
	I0923 03:44:07.369551   10584 start.go:83] releasing machines lock for "bridge-165000", held for 2.312927375s
	W0923 03:44:07.369756   10584 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:07.384300   10584 out.go:201] 
	W0923 03:44:07.388456   10584 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:44:07.388476   10584 out.go:270] * 
	W0923 03:44:07.389782   10584 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:44:07.396279   10584 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.76s)

TestNetworkPlugins/group/kubenet/Start (9.91s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-165000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.911133792s)

-- stdout --
	* [kubenet-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-165000" primary control-plane node in "kubenet-165000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-165000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:44:09.646756   10704 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:44:09.646907   10704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:09.646912   10704 out.go:358] Setting ErrFile to fd 2...
	I0923 03:44:09.646914   10704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:09.647050   10704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:44:09.648209   10704 out.go:352] Setting JSON to false
	I0923 03:44:09.664463   10704 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6220,"bootTime":1727082029,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:44:09.664530   10704 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:44:09.670637   10704 out.go:177] * [kubenet-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:44:09.678409   10704 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:44:09.678481   10704 notify.go:220] Checking for updates...
	I0923 03:44:09.684422   10704 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:44:09.687358   10704 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:44:09.690432   10704 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:44:09.693456   10704 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:44:09.696433   10704 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:44:09.699793   10704 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:44:09.699859   10704 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:44:09.699916   10704 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:44:09.703393   10704 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:44:09.710393   10704 start.go:297] selected driver: qemu2
	I0923 03:44:09.710397   10704 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:44:09.710406   10704 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:44:09.712653   10704 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:44:09.716447   10704 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:44:09.719486   10704 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:44:09.719504   10704 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0923 03:44:09.719534   10704 start.go:340] cluster config:
	{Name:kubenet-165000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:44:09.723198   10704 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:09.731279   10704 out.go:177] * Starting "kubenet-165000" primary control-plane node in "kubenet-165000" cluster
	I0923 03:44:09.735474   10704 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:44:09.735496   10704 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:44:09.735507   10704 cache.go:56] Caching tarball of preloaded images
	I0923 03:44:09.735578   10704 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:44:09.735591   10704 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:44:09.735644   10704 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/kubenet-165000/config.json ...
	I0923 03:44:09.735660   10704 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/kubenet-165000/config.json: {Name:mk44e7d9a252eb456819420ffffea38bc9080e27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:44:09.735873   10704 start.go:360] acquireMachinesLock for kubenet-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:44:09.735904   10704 start.go:364] duration metric: took 26µs to acquireMachinesLock for "kubenet-165000"
	I0923 03:44:09.735917   10704 start.go:93] Provisioning new machine with config: &{Name:kubenet-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:44:09.735944   10704 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:44:09.743419   10704 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:44:09.760033   10704 start.go:159] libmachine.API.Create for "kubenet-165000" (driver="qemu2")
	I0923 03:44:09.760061   10704 client.go:168] LocalClient.Create starting
	I0923 03:44:09.760132   10704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:44:09.760163   10704 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:09.760173   10704 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:09.760213   10704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:44:09.760243   10704 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:09.760251   10704 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:09.760588   10704 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:44:09.924833   10704 main.go:141] libmachine: Creating SSH key...
	I0923 03:44:10.075022   10704 main.go:141] libmachine: Creating Disk image...
	I0923 03:44:10.075030   10704 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:44:10.075244   10704 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/disk.qcow2
	I0923 03:44:10.085558   10704 main.go:141] libmachine: STDOUT: 
	I0923 03:44:10.085582   10704 main.go:141] libmachine: STDERR: 
	I0923 03:44:10.085664   10704 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/disk.qcow2 +20000M
	I0923 03:44:10.094321   10704 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:44:10.094338   10704 main.go:141] libmachine: STDERR: 
	I0923 03:44:10.094358   10704 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/disk.qcow2
	I0923 03:44:10.094364   10704 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:44:10.094376   10704 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:44:10.094408   10704 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:05:ed:38:35:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/disk.qcow2
	I0923 03:44:10.096234   10704 main.go:141] libmachine: STDOUT: 
	I0923 03:44:10.096250   10704 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:44:10.096268   10704 client.go:171] duration metric: took 336.208792ms to LocalClient.Create
	I0923 03:44:12.098408   10704 start.go:128] duration metric: took 2.3624955s to createHost
	I0923 03:44:12.098482   10704 start.go:83] releasing machines lock for "kubenet-165000", held for 2.362621875s
	W0923 03:44:12.098530   10704 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:12.108053   10704 out.go:177] * Deleting "kubenet-165000" in qemu2 ...
	W0923 03:44:12.139841   10704 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:12.139861   10704 start.go:729] Will try again in 5 seconds ...
	I0923 03:44:17.141857   10704 start.go:360] acquireMachinesLock for kubenet-165000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:44:17.141991   10704 start.go:364] duration metric: took 103.875µs to acquireMachinesLock for "kubenet-165000"
	I0923 03:44:17.142008   10704 start.go:93] Provisioning new machine with config: &{Name:kubenet-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:44:17.142044   10704 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:44:17.153218   10704 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 03:44:17.169190   10704 start.go:159] libmachine.API.Create for "kubenet-165000" (driver="qemu2")
	I0923 03:44:17.169224   10704 client.go:168] LocalClient.Create starting
	I0923 03:44:17.169304   10704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:44:17.169341   10704 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:17.169350   10704 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:17.169383   10704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:44:17.169405   10704 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:17.169412   10704 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:17.169709   10704 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:44:17.334633   10704 main.go:141] libmachine: Creating SSH key...
	I0923 03:44:17.470081   10704 main.go:141] libmachine: Creating Disk image...
	I0923 03:44:17.470090   10704 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:44:17.470292   10704 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/disk.qcow2
	I0923 03:44:17.479942   10704 main.go:141] libmachine: STDOUT: 
	I0923 03:44:17.479965   10704 main.go:141] libmachine: STDERR: 
	I0923 03:44:17.480025   10704 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/disk.qcow2 +20000M
	I0923 03:44:17.488041   10704 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:44:17.488060   10704 main.go:141] libmachine: STDERR: 
	I0923 03:44:17.488071   10704 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/disk.qcow2
	I0923 03:44:17.488077   10704 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:44:17.488087   10704 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:44:17.488114   10704 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:00:8e:49:89:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/kubenet-165000/disk.qcow2
	I0923 03:44:17.489769   10704 main.go:141] libmachine: STDOUT: 
	I0923 03:44:17.489790   10704 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:44:17.489803   10704 client.go:171] duration metric: took 320.583291ms to LocalClient.Create
	I0923 03:44:19.491870   10704 start.go:128] duration metric: took 2.349860542s to createHost
	I0923 03:44:19.491909   10704 start.go:83] releasing machines lock for "kubenet-165000", held for 2.349963917s
	W0923 03:44:19.492034   10704 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:19.501402   10704 out.go:201] 
	W0923 03:44:19.508390   10704 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:44:19.508406   10704 out.go:270] * 
	W0923 03:44:19.509202   10704 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:44:19.521343   10704 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.91s)
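As with flannel and bridge above, each start pays the full create/delete/retry cycle before failing on the same unreachable socket. A hypothetical skip-guard sketch (requireSocketVMnet is an invented helper, not part of net_test.go) showing how a harness could skip qemu2 start tests up front, under the same assumption about the socket path:

package nettest // hypothetical package, for illustration only

import (
	"net"
	"testing"
	"time"
)

// requireSocketVMnet skips the calling test when no socket_vmnet daemon is
// listening, rather than spending ~10s per test on a create/retry cycle.
func requireSocketVMnet(t *testing.T) {
	t.Helper()
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		t.Skipf("socket_vmnet not reachable, skipping qemu2 start: %v", err)
	}
	conn.Close()
}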

TestStartStop/group/old-k8s-version/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-937000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-937000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.821939875s)

-- stdout --
	* [old-k8s-version-937000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-937000" primary control-plane node in "old-k8s-version-937000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:44:21.696386   10820 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:44:21.696510   10820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:21.696514   10820 out.go:358] Setting ErrFile to fd 2...
	I0923 03:44:21.696517   10820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:21.696648   10820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:44:21.697897   10820 out.go:352] Setting JSON to false
	I0923 03:44:21.714168   10820 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6232,"bootTime":1727082029,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:44:21.714245   10820 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:44:21.721811   10820 out.go:177] * [old-k8s-version-937000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:44:21.728992   10820 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:44:21.729035   10820 notify.go:220] Checking for updates...
	I0923 03:44:21.734941   10820 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:44:21.737902   10820 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:44:21.740899   10820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:44:21.743933   10820 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:44:21.746955   10820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:44:21.748684   10820 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:44:21.748753   10820 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:44:21.748797   10820 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:44:21.752899   10820 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:44:21.759772   10820 start.go:297] selected driver: qemu2
	I0923 03:44:21.759778   10820 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:44:21.759784   10820 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:44:21.762054   10820 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:44:21.765918   10820 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:44:21.769037   10820 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:44:21.769053   10820 cni.go:84] Creating CNI manager for ""
	I0923 03:44:21.769077   10820 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 03:44:21.769103   10820 start.go:340] cluster config:
	{Name:old-k8s-version-937000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:44:21.772799   10820 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:21.779905   10820 out.go:177] * Starting "old-k8s-version-937000" primary control-plane node in "old-k8s-version-937000" cluster
	I0923 03:44:21.783939   10820 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 03:44:21.783957   10820 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 03:44:21.783971   10820 cache.go:56] Caching tarball of preloaded images
	I0923 03:44:21.784065   10820 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:44:21.784070   10820 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 03:44:21.784131   10820 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/old-k8s-version-937000/config.json ...
	I0923 03:44:21.784141   10820 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/old-k8s-version-937000/config.json: {Name:mkb209f1ebf54615af98af673af3324e1cb3e8cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:44:21.784410   10820 start.go:360] acquireMachinesLock for old-k8s-version-937000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:44:21.784444   10820 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "old-k8s-version-937000"
	I0923 03:44:21.784456   10820 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:44:21.784495   10820 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:44:21.791959   10820 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:44:21.808848   10820 start.go:159] libmachine.API.Create for "old-k8s-version-937000" (driver="qemu2")
	I0923 03:44:21.808882   10820 client.go:168] LocalClient.Create starting
	I0923 03:44:21.808938   10820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:44:21.808978   10820 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:21.809005   10820 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:21.809046   10820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:44:21.809069   10820 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:21.809076   10820 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:21.809429   10820 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:44:21.973401   10820 main.go:141] libmachine: Creating SSH key...
	I0923 03:44:22.041726   10820 main.go:141] libmachine: Creating Disk image...
	I0923 03:44:22.041733   10820 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:44:22.041931   10820 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2
	I0923 03:44:22.051045   10820 main.go:141] libmachine: STDOUT: 
	I0923 03:44:22.051064   10820 main.go:141] libmachine: STDERR: 
	I0923 03:44:22.051126   10820 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2 +20000M
	I0923 03:44:22.058899   10820 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:44:22.058914   10820 main.go:141] libmachine: STDERR: 
	I0923 03:44:22.058928   10820 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2
	I0923 03:44:22.058933   10820 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:44:22.058946   10820 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:44:22.058984   10820 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:81:b1:bf:d9:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2
	I0923 03:44:22.060603   10820 main.go:141] libmachine: STDOUT: 
	I0923 03:44:22.060616   10820 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:44:22.060635   10820 client.go:171] duration metric: took 251.752917ms to LocalClient.Create
	I0923 03:44:24.062808   10820 start.go:128] duration metric: took 2.278329791s to createHost
	I0923 03:44:24.062880   10820 start.go:83] releasing machines lock for "old-k8s-version-937000", held for 2.278477083s
	W0923 03:44:24.062973   10820 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:24.070199   10820 out.go:177] * Deleting "old-k8s-version-937000" in qemu2 ...
	W0923 03:44:24.102454   10820 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:24.102484   10820 start.go:729] Will try again in 5 seconds ...
	I0923 03:44:29.103445   10820 start.go:360] acquireMachinesLock for old-k8s-version-937000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:44:29.103647   10820 start.go:364] duration metric: took 158.542µs to acquireMachinesLock for "old-k8s-version-937000"
	I0923 03:44:29.103697   10820 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:44:29.103796   10820 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:44:29.114095   10820 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:44:29.136954   10820 start.go:159] libmachine.API.Create for "old-k8s-version-937000" (driver="qemu2")
	I0923 03:44:29.136990   10820 client.go:168] LocalClient.Create starting
	I0923 03:44:29.137066   10820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:44:29.137113   10820 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:29.137124   10820 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:29.137166   10820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:44:29.137194   10820 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:29.137202   10820 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:29.137751   10820 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:44:29.303866   10820 main.go:141] libmachine: Creating SSH key...
	I0923 03:44:29.431145   10820 main.go:141] libmachine: Creating Disk image...
	I0923 03:44:29.431157   10820 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:44:29.431395   10820 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2
	I0923 03:44:29.440988   10820 main.go:141] libmachine: STDOUT: 
	I0923 03:44:29.441019   10820 main.go:141] libmachine: STDERR: 
	I0923 03:44:29.441088   10820 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2 +20000M
	I0923 03:44:29.449095   10820 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:44:29.449110   10820 main.go:141] libmachine: STDERR: 
	I0923 03:44:29.449120   10820 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2
	I0923 03:44:29.449132   10820 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:44:29.449138   10820 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:44:29.449168   10820 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:77:b1:73:30:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2
	I0923 03:44:29.450836   10820 main.go:141] libmachine: STDOUT: 
	I0923 03:44:29.450850   10820 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:44:29.450864   10820 client.go:171] duration metric: took 313.87675ms to LocalClient.Create
	I0923 03:44:31.453005   10820 start.go:128] duration metric: took 2.349234291s to createHost
	I0923 03:44:31.453076   10820 start.go:83] releasing machines lock for "old-k8s-version-937000", held for 2.349468875s
	W0923 03:44:31.453390   10820 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:31.462897   10820 out.go:201] 
	W0923 03:44:31.469003   10820 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:44:31.469040   10820 out.go:270] * 
	* 
	W0923 03:44:31.471321   10820 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:44:31.481051   10820 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-937000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000: exit status 7 (54.572083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.88s)
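Note: every start attempt in this block fails at the same step: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), which points at the socket_vmnet daemon not running on the agent rather than at a driver regression. A minimal Go sketch of a preflight probe for that socket follows; the socket path is taken from the cluster config in the logs above, and everything else is illustrative, not part of minikube:

    // probe_socket_vmnet.go - hedged sketch: check whether the socket_vmnet
    // daemon is accepting connections before any VM start is attempted.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dump above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // same condition the tests hit after ~10s of create/retry
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

Run on the agent before the suite, a probe like this would distinguish "daemon down on the host" from a genuine qemu2 driver failure in a couple of seconds.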

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-937000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-937000 create -f testdata/busybox.yaml: exit status 1 (29.053917ms)

** stderr ** 
	error: context "old-k8s-version-937000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-937000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000: exit status 7 (30.729375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-937000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000: exit status 7 (30.260167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
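Note: the kubectl failures in this block are downstream of FirstStart: the cluster was never created, so no "old-k8s-version-937000" context was ever written to the kubeconfig, and every kubectl --context invocation fails identically. A hedged Go sketch of a guard that makes that explicit; `kubectl config get-contexts -o name` is a standard kubectl subcommand, while the program itself is illustrative and not part of the test harness:

    // check_context.go - hedged sketch: confirm a kubeconfig context exists
    // before running commands that assume it.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const profile = "old-k8s-version-937000"
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "kubectl failed:", err)
            os.Exit(1)
        }
        for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if ctx == profile {
                fmt.Println("context exists:", ctx)
                return
            }
        }
        fmt.Printf("context %q does not exist\n", profile) // the error kubectl reports above
        os.Exit(1)
    }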

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-937000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-937000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-937000 describe deploy/metrics-server -n kube-system: exit status 1 (27.676333ms)

** stderr ** 
	error: context "old-k8s-version-937000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-937000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000: exit status 7 (29.694584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-937000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-937000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.185601625s)

-- stdout --
	* [old-k8s-version-937000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-937000" primary control-plane node in "old-k8s-version-937000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-937000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-937000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:44:33.955214   10875 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:44:33.955361   10875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:33.955366   10875 out.go:358] Setting ErrFile to fd 2...
	I0923 03:44:33.955369   10875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:33.955516   10875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:44:33.956738   10875 out.go:352] Setting JSON to false
	I0923 03:44:33.973829   10875 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6244,"bootTime":1727082029,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:44:33.973900   10875 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:44:33.978791   10875 out.go:177] * [old-k8s-version-937000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:44:33.985809   10875 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:44:33.985847   10875 notify.go:220] Checking for updates...
	I0923 03:44:33.993708   10875 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:44:33.997693   10875 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:44:34.000787   10875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:44:34.003778   10875 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:44:34.006759   10875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:44:34.010037   10875 config.go:182] Loaded profile config "old-k8s-version-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 03:44:34.012684   10875 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 03:44:34.015776   10875 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:44:34.018730   10875 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:44:34.025748   10875 start.go:297] selected driver: qemu2
	I0923 03:44:34.025754   10875 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:44:34.025813   10875 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:44:34.028243   10875 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:44:34.028268   10875 cni.go:84] Creating CNI manager for ""
	I0923 03:44:34.028287   10875 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 03:44:34.028310   10875 start.go:340] cluster config:
	{Name:old-k8s-version-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:44:34.031673   10875 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:34.038765   10875 out.go:177] * Starting "old-k8s-version-937000" primary control-plane node in "old-k8s-version-937000" cluster
	I0923 03:44:34.042815   10875 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 03:44:34.042834   10875 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 03:44:34.042843   10875 cache.go:56] Caching tarball of preloaded images
	I0923 03:44:34.042906   10875 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:44:34.042911   10875 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 03:44:34.042964   10875 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/old-k8s-version-937000/config.json ...
	I0923 03:44:34.043441   10875 start.go:360] acquireMachinesLock for old-k8s-version-937000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:44:34.043474   10875 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "old-k8s-version-937000"
	I0923 03:44:34.043486   10875 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:44:34.043489   10875 fix.go:54] fixHost starting: 
	I0923 03:44:34.043599   10875 fix.go:112] recreateIfNeeded on old-k8s-version-937000: state=Stopped err=<nil>
	W0923 03:44:34.043607   10875 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:44:34.047734   10875 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-937000" ...
	I0923 03:44:34.055644   10875 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:44:34.055672   10875 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:77:b1:73:30:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2
	I0923 03:44:34.057556   10875 main.go:141] libmachine: STDOUT: 
	I0923 03:44:34.057572   10875 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:44:34.057601   10875 fix.go:56] duration metric: took 14.109292ms for fixHost
	I0923 03:44:34.057605   10875 start.go:83] releasing machines lock for "old-k8s-version-937000", held for 14.12725ms
	W0923 03:44:34.057611   10875 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:44:34.057643   10875 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:34.057647   10875 start.go:729] Will try again in 5 seconds ...
	I0923 03:44:39.057849   10875 start.go:360] acquireMachinesLock for old-k8s-version-937000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:44:39.058346   10875 start.go:364] duration metric: took 374.417µs to acquireMachinesLock for "old-k8s-version-937000"
	I0923 03:44:39.058496   10875 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:44:39.058517   10875 fix.go:54] fixHost starting: 
	I0923 03:44:39.059242   10875 fix.go:112] recreateIfNeeded on old-k8s-version-937000: state=Stopped err=<nil>
	W0923 03:44:39.059296   10875 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:44:39.064301   10875 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-937000" ...
	I0923 03:44:39.071019   10875 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:44:39.071296   10875 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:77:b1:73:30:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/old-k8s-version-937000/disk.qcow2
	I0923 03:44:39.081250   10875 main.go:141] libmachine: STDOUT: 
	I0923 03:44:39.081319   10875 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:44:39.081419   10875 fix.go:56] duration metric: took 22.901375ms for fixHost
	I0923 03:44:39.081441   10875 start.go:83] releasing machines lock for "old-k8s-version-937000", held for 23.0705ms
	W0923 03:44:39.081644   10875 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:39.088072   10875 out.go:201] 
	W0923 03:44:39.092119   10875 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:44:39.092153   10875 out.go:270] * 
	* 
	W0923 03:44:39.093483   10875 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:44:39.104022   10875 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-937000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000: exit status 7 (44.77675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)
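Note: SecondStart goes through the fixHost path ("Restarting existing qemu2 VM") instead of creating a machine, but dies on the same socket_vmnet connection. The timestamps (03:44:34, then 03:44:39) show minikube's retry shape here: one fixed five-second delay, two attempts, then GUEST_PROVISION. A hedged sketch of that shape follows; it mirrors what the log lines show and is not minikube's actual code:

    // retry_start.go - hedged sketch of the two-attempt, fixed-delay retry
    // visible in the log timestamps above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the driver start that fails in the logs.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err := startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }

A single fixed retry cannot help in this environment: a daemon that is not running does not come back within five seconds on its own.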

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-937000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000: exit status 7 (30.26775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-937000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-937000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-937000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.878083ms)

** stderr ** 
	error: context "old-k8s-version-937000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-937000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000: exit status 7 (29.724375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-937000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000: exit status 7 (29.44725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
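Note: the "(-want +got)" block above reads as a go-cmp diff: every "-" line is an expected v1.20.0 image that `image list` did not return, consistent with the VM never having started rather than with images actually missing from a running node. A minimal reproduction of that output format using github.com/google/go-cmp, which is assumed from the diff syntax and not confirmed against the minikube test source:

    // image_diff.go - hedged sketch: renders a "(-want +got)" diff like the
    // one in the failure above.
    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "k8s.gcr.io/kube-apiserver:v1.20.0",
            "k8s.gcr.io/pause:3.2",
        }
        got := []string{} // image list returned nothing: the VM never started
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
        }
    }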

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-937000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-937000 --alsologtostderr -v=1: exit status 83 (40.203542ms)

-- stdout --
	* The control-plane node old-k8s-version-937000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-937000"

-- /stdout --
** stderr ** 
	I0923 03:44:39.344945   10896 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:44:39.345788   10896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:39.345791   10896 out.go:358] Setting ErrFile to fd 2...
	I0923 03:44:39.345794   10896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:39.345924   10896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:44:39.346122   10896 out.go:352] Setting JSON to false
	I0923 03:44:39.346130   10896 mustload.go:65] Loading cluster: old-k8s-version-937000
	I0923 03:44:39.346348   10896 config.go:182] Loaded profile config "old-k8s-version-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 03:44:39.349279   10896 out.go:177] * The control-plane node old-k8s-version-937000 host is not running: state=Stopped
	I0923 03:44:39.353092   10896 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-937000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-937000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000: exit status 7 (29.018ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-937000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000: exit status 7 (30.454875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
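
The pause failure is a cascade rather than a pause bug: the profile's VM never booted, so "minikube pause" takes the host-not-running branch (out.go:177) and exits with status 83, and the test fails on that non-zero exit alone. Presumably the recovery is the one minikube itself prints, booting the host before any pause is retried:

	out/minikube-darwin-arm64 start -p old-k8s-version-937000
	out/minikube-darwin-arm64 pause -p old-k8s-version-937000 --alsologtostderr -v=1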

TestStartStop/group/no-preload/serial/FirstStart (9.88s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-984000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-984000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.843055375s)

-- stdout --
	* [no-preload-984000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-984000" primary control-plane node in "no-preload-984000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-984000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:44:39.663343   10913 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:44:39.663484   10913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:39.663487   10913 out.go:358] Setting ErrFile to fd 2...
	I0923 03:44:39.663489   10913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:39.663629   10913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:44:39.664672   10913 out.go:352] Setting JSON to false
	I0923 03:44:39.681509   10913 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6250,"bootTime":1727082029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:44:39.681631   10913 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:44:39.686123   10913 out.go:177] * [no-preload-984000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:44:39.692114   10913 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:44:39.692141   10913 notify.go:220] Checking for updates...
	I0923 03:44:39.698092   10913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:44:39.701302   10913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:44:39.704094   10913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:44:39.707080   10913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:44:39.710030   10913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:44:39.713434   10913 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:44:39.713492   10913 config.go:182] Loaded profile config "stopped-upgrade-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 03:44:39.713543   10913 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:44:39.717078   10913 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:44:39.724101   10913 start.go:297] selected driver: qemu2
	I0923 03:44:39.724107   10913 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:44:39.724113   10913 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:44:39.726523   10913 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:44:39.730101   10913 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:44:39.733131   10913 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:44:39.733149   10913 cni.go:84] Creating CNI manager for ""
	I0923 03:44:39.733170   10913 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:44:39.733173   10913 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:44:39.733205   10913 start.go:340] cluster config:
	{Name:no-preload-984000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:44:39.736681   10913 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:39.744879   10913 out.go:177] * Starting "no-preload-984000" primary control-plane node in "no-preload-984000" cluster
	I0923 03:44:39.749059   10913 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:44:39.749126   10913 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/no-preload-984000/config.json ...
	I0923 03:44:39.749142   10913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/no-preload-984000/config.json: {Name:mk976b7939b55ced6b7c1d6ebbc849036183cf25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:44:39.749184   10913 cache.go:107] acquiring lock: {Name:mk9b40db5f4a4860de51bf9554609818322b049e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:39.749183   10913 cache.go:107] acquiring lock: {Name:mk6fa1f30ce1d8e1e703a89225305186cb7244d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:39.749191   10913 cache.go:107] acquiring lock: {Name:mka46662cfade54d433820e30ada98ceac0cb8d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:39.749243   10913 cache.go:115] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0923 03:44:39.749255   10913 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 80.292µs
	I0923 03:44:39.749260   10913 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0923 03:44:39.749265   10913 cache.go:107] acquiring lock: {Name:mk8aa2d1678bdf19e03866e142ead57e1b720341 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:39.749338   10913 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0923 03:44:39.749362   10913 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0923 03:44:39.749400   10913 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0923 03:44:39.749421   10913 start.go:360] acquireMachinesLock for no-preload-984000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:44:39.749393   10913 cache.go:107] acquiring lock: {Name:mk362b76218fe69480e563d6d09a3bee7c98fdc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:39.749431   10913 cache.go:107] acquiring lock: {Name:mk21ceb4e86e923ae7a3f6e76cc615a331d8e178 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:39.749479   10913 cache.go:107] acquiring lock: {Name:mkbc10600249d2069dfaf9d8ff0fcb66d52cd048 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:39.749484   10913 cache.go:107] acquiring lock: {Name:mk0b2d76d6cb80ee971147dabd9084c52a034fb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:39.749508   10913 start.go:364] duration metric: took 81.334µs to acquireMachinesLock for "no-preload-984000"
	I0923 03:44:39.749527   10913 start.go:93] Provisioning new machine with config: &{Name:no-preload-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:44:39.749562   10913 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 03:44:39.749563   10913 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:44:39.749589   10913 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0923 03:44:39.750036   10913 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0923 03:44:39.754541   10913 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0923 03:44:39.758082   10913 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:44:39.762841   10913 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0923 03:44:39.762987   10913 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0923 03:44:39.765193   10913 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0923 03:44:39.765291   10913 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 03:44:39.765424   10913 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0923 03:44:39.766170   10913 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0923 03:44:39.766226   10913 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0923 03:44:39.774438   10913 start.go:159] libmachine.API.Create for "no-preload-984000" (driver="qemu2")
	I0923 03:44:39.774459   10913 client.go:168] LocalClient.Create starting
	I0923 03:44:39.774538   10913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:44:39.774569   10913 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:39.774578   10913 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:39.774624   10913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:44:39.774648   10913 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:39.774660   10913 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:39.775051   10913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:44:39.948527   10913 main.go:141] libmachine: Creating SSH key...
	I0923 03:44:40.074636   10913 main.go:141] libmachine: Creating Disk image...
	I0923 03:44:40.074662   10913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:44:40.074925   10913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2
	I0923 03:44:40.085015   10913 main.go:141] libmachine: STDOUT: 
	I0923 03:44:40.085036   10913 main.go:141] libmachine: STDERR: 
	I0923 03:44:40.085098   10913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2 +20000M
	I0923 03:44:40.094030   10913 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:44:40.094052   10913 main.go:141] libmachine: STDERR: 
	I0923 03:44:40.094074   10913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2
	I0923 03:44:40.094078   10913 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:44:40.094091   10913 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:44:40.094117   10913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:47:51:08:50:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2
	I0923 03:44:40.096051   10913 main.go:141] libmachine: STDOUT: 
	I0923 03:44:40.096080   10913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:44:40.096105   10913 client.go:171] duration metric: took 321.647333ms to LocalClient.Create
	I0923 03:44:40.150258   10913 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0923 03:44:40.174935   10913 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0923 03:44:40.179579   10913 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0923 03:44:40.183310   10913 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0923 03:44:40.193396   10913 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0923 03:44:40.207083   10913 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0923 03:44:40.242226   10913 cache.go:162] opening:  /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0923 03:44:40.299386   10913 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0923 03:44:40.299406   10913 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 549.953916ms
	I0923 03:44:40.299432   10913 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0923 03:44:42.096216   10913 start.go:128] duration metric: took 2.346696208s to createHost
	I0923 03:44:42.096226   10913 start.go:83] releasing machines lock for "no-preload-984000", held for 2.346759833s
	W0923 03:44:42.096239   10913 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:42.102458   10913 out.go:177] * Deleting "no-preload-984000" in qemu2 ...
	W0923 03:44:42.113831   10913 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:42.113841   10913 start.go:729] Will try again in 5 seconds ...
	I0923 03:44:43.853785   10913 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0923 03:44:43.853803   10913 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.104708833s
	I0923 03:44:43.853810   10913 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0923 03:44:44.162216   10913 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0923 03:44:44.162230   10913 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.413158167s
	I0923 03:44:44.162237   10913 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0923 03:44:44.201027   10913 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0923 03:44:44.201035   10913 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.451719458s
	I0923 03:44:44.201047   10913 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0923 03:44:44.526285   10913 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0923 03:44:44.526315   10913 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.777012875s
	I0923 03:44:44.526325   10913 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0923 03:44:44.566930   10913 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0923 03:44:44.566944   10913 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.817784375s
	I0923 03:44:44.566955   10913 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0923 03:44:47.113875   10913 start.go:360] acquireMachinesLock for no-preload-984000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:44:47.114044   10913 start.go:364] duration metric: took 142.209µs to acquireMachinesLock for "no-preload-984000"
	I0923 03:44:47.114095   10913 start.go:93] Provisioning new machine with config: &{Name:no-preload-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:44:47.114148   10913 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:44:47.127974   10913 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:44:47.148391   10913 start.go:159] libmachine.API.Create for "no-preload-984000" (driver="qemu2")
	I0923 03:44:47.148440   10913 client.go:168] LocalClient.Create starting
	I0923 03:44:47.148505   10913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:44:47.148545   10913 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:47.148557   10913 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:47.148599   10913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:44:47.148625   10913 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:47.148636   10913 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:47.148961   10913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:44:47.331318   10913 main.go:141] libmachine: Creating SSH key...
	I0923 03:44:47.409150   10913 main.go:141] libmachine: Creating Disk image...
	I0923 03:44:47.409157   10913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:44:47.409380   10913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2
	I0923 03:44:47.419479   10913 main.go:141] libmachine: STDOUT: 
	I0923 03:44:47.419511   10913 main.go:141] libmachine: STDERR: 
	I0923 03:44:47.419583   10913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2 +20000M
	I0923 03:44:47.428093   10913 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:44:47.428111   10913 main.go:141] libmachine: STDERR: 
	I0923 03:44:47.428127   10913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2
	I0923 03:44:47.428130   10913 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:44:47.428149   10913 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:44:47.428190   10913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:e4:9d:ba:7e:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2
	I0923 03:44:47.429991   10913 main.go:141] libmachine: STDOUT: 
	I0923 03:44:47.430006   10913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:44:47.430019   10913 client.go:171] duration metric: took 281.581083ms to LocalClient.Create
	I0923 03:44:48.569977   10913 cache.go:157] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0923 03:44:48.570040   10913 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.820873125s
	I0923 03:44:48.570066   10913 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0923 03:44:48.570119   10913 cache.go:87] Successfully saved all images to host disk.
	I0923 03:44:49.432118   10913 start.go:128] duration metric: took 2.317976542s to createHost
	I0923 03:44:49.432152   10913 start.go:83] releasing machines lock for "no-preload-984000", held for 2.318147541s
	W0923 03:44:49.432347   10913 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-984000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-984000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:49.450775   10913 out.go:201] 
	W0923 03:44:49.454697   10913 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:44:49.454708   10913 out.go:270] * 
	* 
	W0923 03:44:49.455601   10913 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:44:49.466718   10913 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-984000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000: exit status 7 (34.79525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.88s)
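
The underlying error here, and in the other qemu2 failures in this report, is that nothing is listening on the socket_vmnet control socket: both VM creation attempts die with Failed to connect to "/var/run/socket_vmnet": Connection refused before QEMU ever runs. A plausible triage on the CI host is sketched below; the daemon binary path is inferred from the SocketVMnetClientPath recorded in the cluster config, and the daemon invocation is an assumption, not taken from this log:

	ls -l /var/run/socket_vmnet                    # a missing or stale socket would explain "Connection refused"
	sudo /opt/socket_vmnet/bin/socket_vmnet \
	  --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &    # assumed daemon path and flags
	out/minikube-darwin-arm64 start -p no-preload-984000 --driver=qemu2 --kubernetes-version=v1.31.1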

TestStartStop/group/no-preload/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-984000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-984000 create -f testdata/busybox.yaml: exit status 1 (27.906292ms)

** stderr ** 
	error: context "no-preload-984000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-984000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000: exit status 7 (30.563125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-984000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000: exit status 7 (30.067209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
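
This kubectl failure is downstream of the failed start: since no cluster was ever provisioned, minikube never wrote a "no-preload-984000" context into the kubeconfig, so every kubectl --context call in the remaining subtests fails immediately with exit status 1. A quick confirmation, using only standard kubectl:

	kubectl config get-contexts                      # no "no-preload-984000" entry after the failed start
	kubectl --context no-preload-984000 get pods     # error: context "no-preload-984000" does not exist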

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-984000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-984000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-984000 describe deploy/metrics-server -n kube-system: exit status 1 (26.888333ms)

** stderr ** 
	error: context "no-preload-984000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-984000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000: exit status 7 (31.245292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
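
Note the asymmetry in this subtest: "addons enable" itself exits 0, apparently because it only patches the stored profile config on disk (the SecondStart log below echoes Addons:map[dashboard:true metrics-server:true] and CustomAddonRegistries:map[MetricsServer:fake.domain]), while the kubectl describe fails for the same missing context, leaving the image check at start_stop_delete_test.go:221 with empty deployment info:

	out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-984000 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	# exits 0 even though the VM is stopped; the flags land in the profile config only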

TestStartStop/group/no-preload/serial/SecondStart (5.2s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-984000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-984000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.169465375s)

-- stdout --
	* [no-preload-984000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-984000" primary control-plane node in "no-preload-984000" cluster
	* Restarting existing qemu2 VM for "no-preload-984000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-984000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:44:52.548117   11010 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:44:52.548284   11010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:52.548287   11010 out.go:358] Setting ErrFile to fd 2...
	I0923 03:44:52.548290   11010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:52.548428   11010 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:44:52.549457   11010 out.go:352] Setting JSON to false
	I0923 03:44:52.565825   11010 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6263,"bootTime":1727082029,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:44:52.565893   11010 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:44:52.570990   11010 out.go:177] * [no-preload-984000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:44:52.577989   11010 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:44:52.578073   11010 notify.go:220] Checking for updates...
	I0923 03:44:52.585934   11010 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:44:52.588941   11010 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:44:52.591937   11010 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:44:52.594954   11010 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:44:52.596421   11010 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:44:52.600175   11010 config.go:182] Loaded profile config "no-preload-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:44:52.600514   11010 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:44:52.604961   11010 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:44:52.609933   11010 start.go:297] selected driver: qemu2
	I0923 03:44:52.609938   11010 start.go:901] validating driver "qemu2" against &{Name:no-preload-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:44:52.609983   11010 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:44:52.612293   11010 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:44:52.612313   11010 cni.go:84] Creating CNI manager for ""
	I0923 03:44:52.612333   11010 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:44:52.612351   11010 start.go:340] cluster config:
	{Name:no-preload-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:44:52.615965   11010 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:52.624940   11010 out.go:177] * Starting "no-preload-984000" primary control-plane node in "no-preload-984000" cluster
	I0923 03:44:52.628959   11010 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:44:52.629034   11010 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/no-preload-984000/config.json ...
	I0923 03:44:52.629049   11010 cache.go:107] acquiring lock: {Name:mk6fa1f30ce1d8e1e703a89225305186cb7244d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:52.629082   11010 cache.go:107] acquiring lock: {Name:mka46662cfade54d433820e30ada98ceac0cb8d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:52.629113   11010 cache.go:107] acquiring lock: {Name:mk21ceb4e86e923ae7a3f6e76cc615a331d8e178 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:52.629062   11010 cache.go:107] acquiring lock: {Name:mk9b40db5f4a4860de51bf9554609818322b049e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:52.629144   11010 cache.go:115] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0923 03:44:52.629152   11010 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 69.917µs
	I0923 03:44:52.629158   11010 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0923 03:44:52.629164   11010 cache.go:107] acquiring lock: {Name:mkbc10600249d2069dfaf9d8ff0fcb66d52cd048 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:52.629144   11010 cache.go:115] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0923 03:44:52.629177   11010 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 135.667µs
	I0923 03:44:52.629181   11010 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0923 03:44:52.629187   11010 cache.go:115] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0923 03:44:52.629192   11010 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 114.292µs
	I0923 03:44:52.629196   11010 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0923 03:44:52.629152   11010 cache.go:107] acquiring lock: {Name:mk0b2d76d6cb80ee971147dabd9084c52a034fb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:52.629205   11010 cache.go:115] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0923 03:44:52.629213   11010 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 50.166µs
	I0923 03:44:52.629220   11010 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0923 03:44:52.629219   11010 cache.go:107] acquiring lock: {Name:mk8aa2d1678bdf19e03866e142ead57e1b720341 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:52.629266   11010 cache.go:115] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0923 03:44:52.629270   11010 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 118.625µs
	I0923 03:44:52.629273   11010 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0923 03:44:52.629271   11010 cache.go:115] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0923 03:44:52.629277   11010 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 72.75µs
	I0923 03:44:52.629280   11010 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0923 03:44:52.629274   11010 cache.go:107] acquiring lock: {Name:mk362b76218fe69480e563d6d09a3bee7c98fdc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:52.629298   11010 cache.go:115] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0923 03:44:52.629303   11010 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 242.958µs
	I0923 03:44:52.629310   11010 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0923 03:44:52.629323   11010 cache.go:115] /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0923 03:44:52.629326   11010 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 114.041µs
	I0923 03:44:52.629333   11010 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0923 03:44:52.629336   11010 cache.go:87] Successfully saved all images to host disk.
	I0923 03:44:52.629458   11010 start.go:360] acquireMachinesLock for no-preload-984000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:44:52.629492   11010 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "no-preload-984000"
	I0923 03:44:52.629503   11010 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:44:52.629507   11010 fix.go:54] fixHost starting: 
	I0923 03:44:52.629615   11010 fix.go:112] recreateIfNeeded on no-preload-984000: state=Stopped err=<nil>
	W0923 03:44:52.629629   11010 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:44:52.636950   11010 out.go:177] * Restarting existing qemu2 VM for "no-preload-984000" ...
	I0923 03:44:52.640984   11010 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:44:52.641024   11010 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:e4:9d:ba:7e:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2
	I0923 03:44:52.642953   11010 main.go:141] libmachine: STDOUT: 
	I0923 03:44:52.642971   11010 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:44:52.642998   11010 fix.go:56] duration metric: took 13.490125ms for fixHost
	I0923 03:44:52.643003   11010 start.go:83] releasing machines lock for "no-preload-984000", held for 13.508167ms
	W0923 03:44:52.643009   11010 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:44:52.643039   11010 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:52.643043   11010 start.go:729] Will try again in 5 seconds ...
	I0923 03:44:57.645104   11010 start.go:360] acquireMachinesLock for no-preload-984000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:44:57.645273   11010 start.go:364] duration metric: took 124.167µs to acquireMachinesLock for "no-preload-984000"
	I0923 03:44:57.645297   11010 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:44:57.645315   11010 fix.go:54] fixHost starting: 
	I0923 03:44:57.645604   11010 fix.go:112] recreateIfNeeded on no-preload-984000: state=Stopped err=<nil>
	W0923 03:44:57.645616   11010 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:44:57.650157   11010 out.go:177] * Restarting existing qemu2 VM for "no-preload-984000" ...
	I0923 03:44:57.658046   11010 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:44:57.658147   11010 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:e4:9d:ba:7e:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/no-preload-984000/disk.qcow2
	I0923 03:44:57.662028   11010 main.go:141] libmachine: STDOUT: 
	I0923 03:44:57.662054   11010 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:44:57.662086   11010 fix.go:56] duration metric: took 16.770958ms for fixHost
	I0923 03:44:57.662094   11010 start.go:83] releasing machines lock for "no-preload-984000", held for 16.81175ms
	W0923 03:44:57.662153   11010 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-984000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-984000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:44:57.663647   11010 out.go:201] 
	W0923 03:44:57.668062   11010 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:44:57.668076   11010 out.go:270] * 
	* 
	W0923 03:44:57.668929   11010 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:44:57.680083   11010 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-984000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000: exit status 7 (34.18075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.20s)
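
Every failure in this group traces to the same line: socket_vmnet_client cannot reach the socket_vmnet daemon on the host, so qemu never receives its network file descriptor and the VM never boots. The daemon's health can be checked independently of minikube by dialing the unix socket directly; a minimal Go sketch, assuming only the socket path shown in the log above:

	// probe_socket_vmnet.go: dial the socket that socket_vmnet_client uses.
	// "connection refused" (or "no such file") here reproduces the failure
	// mode seen throughout this report.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the driver's command line
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

If the probe fails, the socket_vmnet service on the build host needs to be restarted (for a Homebrew install it is typically managed as a launchd service) before the remaining failures below carry any signal.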

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-984000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000: exit status 7 (49.532667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.05s)
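
This failure is secondary: because SecondStart never brought the cluster up, no "no-preload-984000" entry was written to the kubeconfig, and every later step fails with the same "context does not exist" error. The precondition the test trips over can be sketched with client-go (assuming the standard k8s.io/client-go module and default kubeconfig loading rules):

	// has_context.go: check that a kubeconfig context exists before use.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Honors $KUBECONFIG and falls back to ~/.kube/config.
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts["no-preload-984000"]; !ok {
			fmt.Println(`context "no-preload-984000" does not exist`)
			os.Exit(1)
		}
		fmt.Println("context present")
	}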

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-984000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-984000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-984000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.591875ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-984000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-984000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000: exit status 7 (33.601333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (10.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-574000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-574000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.07530825s)

                                                
                                                
-- stdout --
	* [embed-certs-574000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-574000" primary control-plane node in "embed-certs-574000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-574000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:44:57.835038   11032 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:44:57.835179   11032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:57.835183   11032 out.go:358] Setting ErrFile to fd 2...
	I0923 03:44:57.835186   11032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:57.835337   11032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:44:57.836401   11032 out.go:352] Setting JSON to false
	I0923 03:44:57.854369   11032 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6268,"bootTime":1727082029,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:44:57.854451   11032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:44:57.858923   11032 out.go:177] * [embed-certs-574000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:44:57.867214   11032 notify.go:220] Checking for updates...
	I0923 03:44:57.871022   11032 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:44:57.877899   11032 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:44:57.885065   11032 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:44:57.892018   11032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:44:57.896599   11032 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:44:57.904015   11032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:44:57.905990   11032 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:44:57.906052   11032 config.go:182] Loaded profile config "no-preload-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:44:57.906108   11032 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:44:57.914101   11032 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:44:57.920990   11032 start.go:297] selected driver: qemu2
	I0923 03:44:57.920998   11032 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:44:57.921004   11032 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:44:57.923445   11032 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:44:57.927015   11032 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:44:57.930188   11032 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:44:57.930219   11032 cni.go:84] Creating CNI manager for ""
	I0923 03:44:57.930242   11032 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:44:57.930248   11032 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:44:57.930280   11032 start.go:340] cluster config:
	{Name:embed-certs-574000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-574000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:44:57.934649   11032 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:57.939011   11032 out.go:177] * Starting "embed-certs-574000" primary control-plane node in "embed-certs-574000" cluster
	I0923 03:44:57.945024   11032 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:44:57.945054   11032 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:44:57.945061   11032 cache.go:56] Caching tarball of preloaded images
	I0923 03:44:57.945159   11032 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:44:57.945165   11032 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:44:57.945234   11032 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/embed-certs-574000/config.json ...
	I0923 03:44:57.945244   11032 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/embed-certs-574000/config.json: {Name:mkdb59681b8222d3c4f2bbbd8a465cd6cb927e74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:44:57.945455   11032 start.go:360] acquireMachinesLock for embed-certs-574000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:44:57.945486   11032 start.go:364] duration metric: took 24.458µs to acquireMachinesLock for "embed-certs-574000"
	I0923 03:44:57.945497   11032 start.go:93] Provisioning new machine with config: &{Name:embed-certs-574000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-574000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:44:57.945528   11032 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:44:57.950063   11032 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:44:57.966989   11032 start.go:159] libmachine.API.Create for "embed-certs-574000" (driver="qemu2")
	I0923 03:44:57.967020   11032 client.go:168] LocalClient.Create starting
	I0923 03:44:57.967087   11032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:44:57.967128   11032 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:57.967139   11032 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:57.967179   11032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:44:57.967203   11032 main.go:141] libmachine: Decoding PEM data...
	I0923 03:44:57.967212   11032 main.go:141] libmachine: Parsing certificate...
	I0923 03:44:57.967613   11032 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:44:58.223721   11032 main.go:141] libmachine: Creating SSH key...
	I0923 03:44:58.294376   11032 main.go:141] libmachine: Creating Disk image...
	I0923 03:44:58.294388   11032 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:44:58.294579   11032 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2
	I0923 03:44:58.304229   11032 main.go:141] libmachine: STDOUT: 
	I0923 03:44:58.304253   11032 main.go:141] libmachine: STDERR: 
	I0923 03:44:58.304330   11032 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2 +20000M
	I0923 03:44:58.316628   11032 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:44:58.316645   11032 main.go:141] libmachine: STDERR: 
	I0923 03:44:58.316660   11032 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2
	I0923 03:44:58.316664   11032 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:44:58.316681   11032 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:44:58.316709   11032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:27:d7:59:bc:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2
	I0923 03:44:58.318354   11032 main.go:141] libmachine: STDOUT: 
	I0923 03:44:58.318369   11032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:44:58.318391   11032 client.go:171] duration metric: took 351.372375ms to LocalClient.Create
	I0923 03:45:00.320565   11032 start.go:128] duration metric: took 2.375060542s to createHost
	I0923 03:45:00.320643   11032 start.go:83] releasing machines lock for "embed-certs-574000", held for 2.375198083s
	W0923 03:45:00.320764   11032 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:00.340915   11032 out.go:177] * Deleting "embed-certs-574000" in qemu2 ...
	W0923 03:45:00.369797   11032 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:00.369823   11032 start.go:729] Will try again in 5 seconds ...
	I0923 03:45:05.371872   11032 start.go:360] acquireMachinesLock for embed-certs-574000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:45:05.372351   11032 start.go:364] duration metric: took 395.5µs to acquireMachinesLock for "embed-certs-574000"
	I0923 03:45:05.372470   11032 start.go:93] Provisioning new machine with config: &{Name:embed-certs-574000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-574000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:45:05.372787   11032 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:45:05.375839   11032 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:45:05.425937   11032 start.go:159] libmachine.API.Create for "embed-certs-574000" (driver="qemu2")
	I0923 03:45:05.425992   11032 client.go:168] LocalClient.Create starting
	I0923 03:45:05.426120   11032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:45:05.426192   11032 main.go:141] libmachine: Decoding PEM data...
	I0923 03:45:05.426207   11032 main.go:141] libmachine: Parsing certificate...
	I0923 03:45:05.426269   11032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:45:05.426312   11032 main.go:141] libmachine: Decoding PEM data...
	I0923 03:45:05.426327   11032 main.go:141] libmachine: Parsing certificate...
	I0923 03:45:05.426854   11032 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:45:05.603641   11032 main.go:141] libmachine: Creating SSH key...
	I0923 03:45:05.805904   11032 main.go:141] libmachine: Creating Disk image...
	I0923 03:45:05.805910   11032 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:45:05.806158   11032 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2
	I0923 03:45:05.815935   11032 main.go:141] libmachine: STDOUT: 
	I0923 03:45:05.815953   11032 main.go:141] libmachine: STDERR: 
	I0923 03:45:05.816021   11032 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2 +20000M
	I0923 03:45:05.823922   11032 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:45:05.823938   11032 main.go:141] libmachine: STDERR: 
	I0923 03:45:05.823950   11032 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2
	I0923 03:45:05.823954   11032 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:45:05.823963   11032 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:45:05.823994   11032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:95:36:fa:d4:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2
	I0923 03:45:05.825652   11032 main.go:141] libmachine: STDOUT: 
	I0923 03:45:05.825667   11032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:45:05.825680   11032 client.go:171] duration metric: took 399.6915ms to LocalClient.Create
	I0923 03:45:07.827191   11032 start.go:128] duration metric: took 2.454429125s to createHost
	I0923 03:45:07.827242   11032 start.go:83] releasing machines lock for "embed-certs-574000", held for 2.454922042s
	W0923 03:45:07.827530   11032 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-574000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-574000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:07.847301   11032 out.go:201] 
	W0923 03:45:07.855306   11032 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:45:07.855340   11032 out.go:270] * 
	* 
	W0923 03:45:07.857197   11032 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:45:07.867097   11032 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-574000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000: exit status 7 (50.737ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.13s)
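
The create path visible in this log is: build the disk image with qemu-img convert and qemu-img resize (both succeed), then launch qemu-system-aarch64 through socket_vmnet_client, which connects to the unix socket and hands the child the connected descriptor as fd 3, matching "-netdev socket,id=net0,fd=3" on the command line. A Go sketch of that fd-passing pattern, with placeholder arguments rather than the driver's actual code:

	// run_with_fd.go: pass a connected unix socket to a child process as fd 3.
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatal(err) // the point at which this log reports "Connection refused"
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		// Placeholder invocation; the real driver passes the full VM configuration.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

Because the dial happens before qemu is even spawned, a dead daemon fails fast, which is consistent with each create attempt spending only milliseconds on the VM launch itself.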

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-984000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000: exit status 7 (33.946292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.11s)
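
The diff above uses go-cmp's "(-want +got)" notation: each "-" line is an image the test expected "image list" to report, and the got side is empty because there was no running VM to query. For reference, a sketch of how such a diff is produced (assuming the github.com/google/go-cmp module):

	// image_diff.go: reproduce the "(-want +got)" diff style.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/pause:3.10",
		}
		got := []string{} // nothing listed: the VM never started
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}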

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-984000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-984000 --alsologtostderr -v=1: exit status 83 (55.725541ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-984000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-984000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:44:57.973640   11044 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:44:57.973788   11044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:57.973792   11044 out.go:358] Setting ErrFile to fd 2...
	I0923 03:44:57.973794   11044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:57.973938   11044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:44:57.974161   11044 out.go:352] Setting JSON to false
	I0923 03:44:57.974170   11044 mustload.go:65] Loading cluster: no-preload-984000
	I0923 03:44:57.974395   11044 config.go:182] Loaded profile config "no-preload-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:44:57.989200   11044 out.go:177] * The control-plane node no-preload-984000 host is not running: state=Stopped
	I0923 03:44:57.997035   11044 out.go:177]   To start a cluster, run: "minikube start -p no-preload-984000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-984000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000: exit status 7 (33.243083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-984000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000: exit status 7 (34.162125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.12s)
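
Unlike the exit-status-80 start failures above, pause exits with status 83: minikube sees from the profile that the host is stopped and returns advice ("To start a cluster, run ...") instead of attempting the operation. A caller distinguishing the two outcomes might branch on the exit code; in this sketch the meaning of 83 is inferred from this log, not taken from minikube documentation:

	// pause_exitcode.go: separate "host not running" from hard failures.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "no-preload-984000")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			if ee.ExitCode() == 83 {
				fmt.Println("cluster not running; start it before pausing")
			} else {
				fmt.Println("pause failed with exit status", ee.ExitCode())
			}
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Println("paused")
	}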

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-152000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-152000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (11.744587875s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-152000" primary control-plane node in "default-k8s-diff-port-152000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-152000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 03:44:58.517190   11072 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:44:58.517309   11072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:58.517312   11072 out.go:358] Setting ErrFile to fd 2...
	I0923 03:44:58.517315   11072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:44:58.517445   11072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:44:58.518529   11072 out.go:352] Setting JSON to false
	I0923 03:44:58.534794   11072 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6269,"bootTime":1727082029,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:44:58.534861   11072 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:44:58.540101   11072 out.go:177] * [default-k8s-diff-port-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:44:58.544804   11072 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:44:58.544903   11072 notify.go:220] Checking for updates...
	I0923 03:44:58.553059   11072 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:44:58.557020   11072 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:44:58.560047   11072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:44:58.563063   11072 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:44:58.566055   11072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:44:58.569411   11072 config.go:182] Loaded profile config "embed-certs-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:44:58.569469   11072 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:44:58.569513   11072 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:44:58.573012   11072 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:44:58.580055   11072 start.go:297] selected driver: qemu2
	I0923 03:44:58.580061   11072 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:44:58.580068   11072 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:44:58.582348   11072 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:44:58.586018   11072 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:44:58.590110   11072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:44:58.590132   11072 cni.go:84] Creating CNI manager for ""
	I0923 03:44:58.590163   11072 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:44:58.590174   11072 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:44:58.590202   11072 start.go:340] cluster config:
	{Name:default-k8s-diff-port-152000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:44:58.593873   11072 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:44:58.601931   11072 out.go:177] * Starting "default-k8s-diff-port-152000" primary control-plane node in "default-k8s-diff-port-152000" cluster
	I0923 03:44:58.606046   11072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:44:58.606062   11072 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:44:58.606071   11072 cache.go:56] Caching tarball of preloaded images
	I0923 03:44:58.606137   11072 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:44:58.606143   11072 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:44:58.606212   11072 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/default-k8s-diff-port-152000/config.json ...
	I0923 03:44:58.606227   11072 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/default-k8s-diff-port-152000/config.json: {Name:mk0861a02a0144cb0a4fdf8ecdb82309825957b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:44:58.606455   11072 start.go:360] acquireMachinesLock for default-k8s-diff-port-152000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:45:00.320803   11072 start.go:364] duration metric: took 1.714357458s to acquireMachinesLock for "default-k8s-diff-port-152000"
	I0923 03:45:00.320985   11072 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:45:00.321173   11072 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:45:00.330965   11072 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:45:00.383730   11072 start.go:159] libmachine.API.Create for "default-k8s-diff-port-152000" (driver="qemu2")
	I0923 03:45:00.383792   11072 client.go:168] LocalClient.Create starting
	I0923 03:45:00.383928   11072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:45:00.383985   11072 main.go:141] libmachine: Decoding PEM data...
	I0923 03:45:00.384004   11072 main.go:141] libmachine: Parsing certificate...
	I0923 03:45:00.384068   11072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:45:00.384113   11072 main.go:141] libmachine: Decoding PEM data...
	I0923 03:45:00.384125   11072 main.go:141] libmachine: Parsing certificate...
	I0923 03:45:00.384769   11072 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:45:00.565024   11072 main.go:141] libmachine: Creating SSH key...
	I0923 03:45:00.596354   11072 main.go:141] libmachine: Creating Disk image...
	I0923 03:45:00.596360   11072 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:45:00.596575   11072 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2
	I0923 03:45:00.605723   11072 main.go:141] libmachine: STDOUT: 
	I0923 03:45:00.605738   11072 main.go:141] libmachine: STDERR: 
	I0923 03:45:00.605798   11072 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2 +20000M
	I0923 03:45:00.613682   11072 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:45:00.613696   11072 main.go:141] libmachine: STDERR: 
	I0923 03:45:00.613714   11072 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2
	I0923 03:45:00.613726   11072 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:45:00.613739   11072 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:45:00.613762   11072 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:9d:8e:09:a9:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2
	I0923 03:45:00.615402   11072 main.go:141] libmachine: STDOUT: 
	I0923 03:45:00.615416   11072 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:45:00.615436   11072 client.go:171] duration metric: took 231.64075ms to LocalClient.Create
	I0923 03:45:02.617595   11072 start.go:128] duration metric: took 2.296421417s to createHost
	I0923 03:45:02.617642   11072 start.go:83] releasing machines lock for "default-k8s-diff-port-152000", held for 2.296841208s
	W0923 03:45:02.617703   11072 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:02.636985   11072 out.go:177] * Deleting "default-k8s-diff-port-152000" in qemu2 ...
	W0923 03:45:02.682105   11072 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:02.682129   11072 start.go:729] Will try again in 5 seconds ...
	I0923 03:45:07.684256   11072 start.go:360] acquireMachinesLock for default-k8s-diff-port-152000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:45:07.827346   11072 start.go:364] duration metric: took 142.983ms to acquireMachinesLock for "default-k8s-diff-port-152000"
	I0923 03:45:07.827524   11072 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:45:07.827827   11072 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:45:07.843208   11072 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:45:07.894874   11072 start.go:159] libmachine.API.Create for "default-k8s-diff-port-152000" (driver="qemu2")
	I0923 03:45:07.894922   11072 client.go:168] LocalClient.Create starting
	I0923 03:45:07.895001   11072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:45:07.895059   11072 main.go:141] libmachine: Decoding PEM data...
	I0923 03:45:07.895077   11072 main.go:141] libmachine: Parsing certificate...
	I0923 03:45:07.895134   11072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:45:07.895164   11072 main.go:141] libmachine: Decoding PEM data...
	I0923 03:45:07.895177   11072 main.go:141] libmachine: Parsing certificate...
	I0923 03:45:07.895716   11072 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:45:08.104964   11072 main.go:141] libmachine: Creating SSH key...
	I0923 03:45:08.171562   11072 main.go:141] libmachine: Creating Disk image...
	I0923 03:45:08.171575   11072 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:45:08.171807   11072 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2
	I0923 03:45:08.181515   11072 main.go:141] libmachine: STDOUT: 
	I0923 03:45:08.181543   11072 main.go:141] libmachine: STDERR: 
	I0923 03:45:08.181629   11072 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2 +20000M
	I0923 03:45:08.190873   11072 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:45:08.190895   11072 main.go:141] libmachine: STDERR: 
	I0923 03:45:08.190906   11072 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2
	I0923 03:45:08.190911   11072 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:45:08.190921   11072 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:45:08.190947   11072 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:57:55:c6:e4:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2
	I0923 03:45:08.192750   11072 main.go:141] libmachine: STDOUT: 
	I0923 03:45:08.192766   11072 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:45:08.192778   11072 client.go:171] duration metric: took 297.857666ms to LocalClient.Create
	I0923 03:45:10.194937   11072 start.go:128] duration metric: took 2.367128917s to createHost
	I0923 03:45:10.195003   11072 start.go:83] releasing machines lock for "default-k8s-diff-port-152000", held for 2.367671625s
	W0923 03:45:10.195439   11072 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-152000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-152000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:10.205104   11072 out.go:201] 
	W0923 03:45:10.209132   11072 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:45:10.209161   11072 out.go:270] * 
	* 
	W0923 03:45:10.211957   11072 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:45:10.220101   11072 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-152000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000: exit status 7 (66.176792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.81s)
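
Every start attempt above dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the failure sits in front of minikube itself. A minimal troubleshooting sketch, assuming socket_vmnet was installed via Homebrew (the service name and restart command are assumptions; the socket path and client path come from the log):

	# confirm the daemon's unix socket exists and accepts connections
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null && echo "socket accepts connections"
	# if managed by Homebrew, (re)start the service with root privileges
	sudo brew services restart socket_vmnet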

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-574000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-574000 create -f testdata/busybox.yaml: exit status 1 (30.450583ms)

** stderr ** 
	error: context "embed-certs-574000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-574000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000: exit status 7 (34.352208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-574000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000: exit status 7 (33.970959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
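
The "context \"embed-certs-574000\" does not exist" error is a cascade from the failed FirstStart: the VM never came up, so minikube never wrote a kubeconfig context for the profile. A quick manual check, as a sketch (the context name is taken from the log above):

	# list the contexts kubectl actually knows about; embed-certs-574000 will be absent
	kubectl config get-contexts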

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-574000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-574000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-574000 describe deploy/metrics-server -n kube-system: exit status 1 (28.043417ms)

** stderr ** 
	error: context "embed-certs-574000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-574000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000: exit status 7 (30.806333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)
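
On a healthy cluster, the image check that fails at start_stop_delete_test.go:221 can be reproduced by hand. A sketch, assuming the metrics-server deployment exists (the jsonpath expression is illustrative, not taken from the test source):

	# print the image of the metrics-server deployment; the test expects it to
	# contain "fake.domain/registry.k8s.io/echoserver:1.4"
	kubectl --context embed-certs-574000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'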

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-152000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-152000 create -f testdata/busybox.yaml: exit status 1 (29.670958ms)

** stderr ** 
	error: context "default-k8s-diff-port-152000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-152000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000: exit status 7 (29.704458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-152000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000: exit status 7 (29.608541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-152000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-152000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-152000 describe deploy/metrics-server -n kube-system: exit status 1 (26.916125ms)

** stderr ** 
	error: context "default-k8s-diff-port-152000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-152000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000: exit status 7 (28.827333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-574000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-574000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.186720791s)

-- stdout --
	* [embed-certs-574000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-574000" primary control-plane node in "embed-certs-574000" cluster
	* Restarting existing qemu2 VM for "embed-certs-574000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-574000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:45:11.349844   11151 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:45:11.349964   11151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:11.349968   11151 out.go:358] Setting ErrFile to fd 2...
	I0923 03:45:11.349977   11151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:11.350122   11151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:45:11.351168   11151 out.go:352] Setting JSON to false
	I0923 03:45:11.367186   11151 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6282,"bootTime":1727082029,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:45:11.367262   11151 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:45:11.372558   11151 out.go:177] * [embed-certs-574000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:45:11.379618   11151 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:45:11.379658   11151 notify.go:220] Checking for updates...
	I0923 03:45:11.387558   11151 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:45:11.389044   11151 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:45:11.392500   11151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:45:11.395525   11151 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:45:11.398547   11151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:45:11.401940   11151 config.go:182] Loaded profile config "embed-certs-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:45:11.402194   11151 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:45:11.406521   11151 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:45:11.413576   11151 start.go:297] selected driver: qemu2
	I0923 03:45:11.413581   11151 start.go:901] validating driver "qemu2" against &{Name:embed-certs-574000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-574000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:45:11.413652   11151 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:45:11.416000   11151 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:45:11.416041   11151 cni.go:84] Creating CNI manager for ""
	I0923 03:45:11.416062   11151 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:45:11.416086   11151 start.go:340] cluster config:
	{Name:embed-certs-574000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-574000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:45:11.419784   11151 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:45:11.428540   11151 out.go:177] * Starting "embed-certs-574000" primary control-plane node in "embed-certs-574000" cluster
	I0923 03:45:11.432553   11151 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:45:11.432567   11151 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:45:11.432579   11151 cache.go:56] Caching tarball of preloaded images
	I0923 03:45:11.432652   11151 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:45:11.432660   11151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:45:11.432725   11151 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/embed-certs-574000/config.json ...
	I0923 03:45:11.433216   11151 start.go:360] acquireMachinesLock for embed-certs-574000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:45:11.433244   11151 start.go:364] duration metric: took 22.208µs to acquireMachinesLock for "embed-certs-574000"
	I0923 03:45:11.433253   11151 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:45:11.433258   11151 fix.go:54] fixHost starting: 
	I0923 03:45:11.433376   11151 fix.go:112] recreateIfNeeded on embed-certs-574000: state=Stopped err=<nil>
	W0923 03:45:11.433385   11151 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:45:11.441584   11151 out.go:177] * Restarting existing qemu2 VM for "embed-certs-574000" ...
	I0923 03:45:11.445547   11151 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:45:11.445584   11151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:95:36:fa:d4:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2
	I0923 03:45:11.447616   11151 main.go:141] libmachine: STDOUT: 
	I0923 03:45:11.447638   11151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:45:11.447669   11151 fix.go:56] duration metric: took 14.409208ms for fixHost
	I0923 03:45:11.447673   11151 start.go:83] releasing machines lock for "embed-certs-574000", held for 14.425333ms
	W0923 03:45:11.447680   11151 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:45:11.447710   11151 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:11.447715   11151 start.go:729] Will try again in 5 seconds ...
	I0923 03:45:16.449836   11151 start.go:360] acquireMachinesLock for embed-certs-574000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:45:16.450336   11151 start.go:364] duration metric: took 385.833µs to acquireMachinesLock for "embed-certs-574000"
	I0923 03:45:16.450470   11151 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:45:16.450493   11151 fix.go:54] fixHost starting: 
	I0923 03:45:16.451328   11151 fix.go:112] recreateIfNeeded on embed-certs-574000: state=Stopped err=<nil>
	W0923 03:45:16.451357   11151 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:45:16.460960   11151 out.go:177] * Restarting existing qemu2 VM for "embed-certs-574000" ...
	I0923 03:45:16.465908   11151 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:45:16.466074   11151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:95:36:fa:d4:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/embed-certs-574000/disk.qcow2
	I0923 03:45:16.475263   11151 main.go:141] libmachine: STDOUT: 
	I0923 03:45:16.475342   11151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:45:16.475418   11151 fix.go:56] duration metric: took 24.931ms for fixHost
	I0923 03:45:16.475432   11151 start.go:83] releasing machines lock for "embed-certs-574000", held for 25.068041ms
	W0923 03:45:16.475609   11151 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-574000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-574000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:16.481977   11151 out.go:201] 
	W0923 03:45:16.485086   11151 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:45:16.485126   11151 out.go:270] * 
	* 
	W0923 03:45:16.487703   11151 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:45:16.494964   11151 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-574000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000: exit status 7 (67.053792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
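
The recovery the output itself suggests ("minikube delete -p embed-certs-574000" may fix it) would look as follows, although as long as socket_vmnet keeps refusing connections a fresh start will fail the same way (commands reconstructed from the args recorded in the log):

	# discard the stale profile, then retry with the same flags as the test
	out/minikube-darwin-arm64 delete -p embed-certs-574000
	out/minikube-darwin-arm64 start -p embed-certs-574000 --memory=2200 \
	  --alsologtostderr --wait=true --embed-certs --driver=qemu2 \
	  --kubernetes-version=v1.31.1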

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-152000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-152000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (7.243622458s)

-- stdout --
	* [default-k8s-diff-port-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-152000" primary control-plane node in "default-k8s-diff-port-152000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-152000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-152000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:45:12.543278   11166 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:45:12.543399   11166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:12.543403   11166 out.go:358] Setting ErrFile to fd 2...
	I0923 03:45:12.543405   11166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:12.543537   11166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:45:12.544583   11166 out.go:352] Setting JSON to false
	I0923 03:45:12.560543   11166 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6283,"bootTime":1727082029,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:45:12.560612   11166 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:45:12.565933   11166 out.go:177] * [default-k8s-diff-port-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:45:12.573875   11166 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:45:12.573922   11166 notify.go:220] Checking for updates...
	I0923 03:45:12.581857   11166 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:45:12.584866   11166 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:45:12.587927   11166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:45:12.590858   11166 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:45:12.593852   11166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:45:12.597205   11166 config.go:182] Loaded profile config "default-k8s-diff-port-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:45:12.597484   11166 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:45:12.601890   11166 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:45:12.608852   11166 start.go:297] selected driver: qemu2
	I0923 03:45:12.608858   11166 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:45:12.608905   11166 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:45:12.611293   11166 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 03:45:12.611328   11166 cni.go:84] Creating CNI manager for ""
	I0923 03:45:12.611353   11166 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:45:12.611392   11166 start.go:340] cluster config:
	{Name:default-k8s-diff-port-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:45:12.615029   11166 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:45:12.621869   11166 out.go:177] * Starting "default-k8s-diff-port-152000" primary control-plane node in "default-k8s-diff-port-152000" cluster
	I0923 03:45:12.624851   11166 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:45:12.624868   11166 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:45:12.624877   11166 cache.go:56] Caching tarball of preloaded images
	I0923 03:45:12.624954   11166 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:45:12.624959   11166 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:45:12.625024   11166 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/default-k8s-diff-port-152000/config.json ...
	I0923 03:45:12.625897   11166 start.go:360] acquireMachinesLock for default-k8s-diff-port-152000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:45:12.625940   11166 start.go:364] duration metric: took 36.208µs to acquireMachinesLock for "default-k8s-diff-port-152000"
	I0923 03:45:12.625950   11166 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:45:12.625955   11166 fix.go:54] fixHost starting: 
	I0923 03:45:12.626077   11166 fix.go:112] recreateIfNeeded on default-k8s-diff-port-152000: state=Stopped err=<nil>
	W0923 03:45:12.626086   11166 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:45:12.630883   11166 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-152000" ...
	I0923 03:45:12.637834   11166 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:45:12.637869   11166 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:57:55:c6:e4:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2
	I0923 03:45:12.639946   11166 main.go:141] libmachine: STDOUT: 
	I0923 03:45:12.639962   11166 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:45:12.639992   11166 fix.go:56] duration metric: took 14.036042ms for fixHost
	I0923 03:45:12.639996   11166 start.go:83] releasing machines lock for "default-k8s-diff-port-152000", held for 14.052375ms
	W0923 03:45:12.640002   11166 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:45:12.640048   11166 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:12.640053   11166 start.go:729] Will try again in 5 seconds ...
	I0923 03:45:17.642028   11166 start.go:360] acquireMachinesLock for default-k8s-diff-port-152000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:45:19.682704   11166 start.go:364] duration metric: took 2.040676542s to acquireMachinesLock for "default-k8s-diff-port-152000"
	I0923 03:45:19.682803   11166 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:45:19.682827   11166 fix.go:54] fixHost starting: 
	I0923 03:45:19.683602   11166 fix.go:112] recreateIfNeeded on default-k8s-diff-port-152000: state=Stopped err=<nil>
	W0923 03:45:19.683630   11166 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:45:19.692201   11166 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-152000" ...
	I0923 03:45:19.707243   11166 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:45:19.707479   11166 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:57:55:c6:e4:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/default-k8s-diff-port-152000/disk.qcow2
	I0923 03:45:19.716972   11166 main.go:141] libmachine: STDOUT: 
	I0923 03:45:19.717029   11166 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:45:19.717107   11166 fix.go:56] duration metric: took 34.28575ms for fixHost
	I0923 03:45:19.717125   11166 start.go:83] releasing machines lock for "default-k8s-diff-port-152000", held for 34.380042ms
	W0923 03:45:19.717315   11166 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-152000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-152000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:19.726159   11166 out.go:201] 
	W0923 03:45:19.730280   11166 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:45:19.730300   11166 out.go:270] * 
	* 
	W0923 03:45:19.732284   11166 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:45:19.743159   11166 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-152000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000: exit status 7 (63.196875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.31s)
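
Every failure in this group traces to the same driver error captured in the stderr above: the qemu2 driver cannot reach the socket_vmnet control socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never boots and every later step finds a stopped host. A minimal host-side triage sketch, assuming the standard socket_vmnet install paths that appear in the QEMU command lines above (the launchd service label is a guess, not taken from this log):

	# is the unix socket present on the host?
	ls -l /var/run/socket_vmnet
	# is the socket_vmnet daemon process actually running?
	ps aux | grep '[s]ocket_vmnet'
	# if installed as a launchd service, check whether it is loaded
	sudo launchctl list | grep -i vmnet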

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-574000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000: exit status 7 (32.169333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-574000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-574000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-574000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.77775ms)

** stderr ** 
	error: context "embed-certs-574000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-574000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000: exit status 7 (29.839959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
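
The context "embed-certs-574000" does not exist errors above are follow-on failures rather than independent bugs: the cluster was never provisioned, so minikube never wrote that context into the kubeconfig, and every kubectl --context invocation fails at client-config time. A quick confirmation using only stock kubectl (the context name comes from the log above):

	kubectl config get-contexts
	kubectl config get-contexts embed-certs-574000   # exits non-zero when the context is absent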

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-574000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000: exit status 7 (29.171333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-574000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-574000 --alsologtostderr -v=1: exit status 83 (39.408584ms)

-- stdout --
	* The control-plane node embed-certs-574000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-574000"

-- /stdout --
** stderr ** 
	I0923 03:45:16.761716   11191 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:45:16.761888   11191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:16.761892   11191 out.go:358] Setting ErrFile to fd 2...
	I0923 03:45:16.761894   11191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:16.762018   11191 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:45:16.762224   11191 out.go:352] Setting JSON to false
	I0923 03:45:16.762234   11191 mustload.go:65] Loading cluster: embed-certs-574000
	I0923 03:45:16.762441   11191 config.go:182] Loaded profile config "embed-certs-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:45:16.766451   11191 out.go:177] * The control-plane node embed-certs-574000 host is not running: state=Stopped
	I0923 03:45:16.769273   11191 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-574000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-574000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000: exit status 7 (29.259208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-574000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000: exit status 7 (29.295583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
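
pause exits with status 83 here because the profile's host is Stopped, exactly as the stdout above reports. A hedged sketch of how a wrapper script could gate pause on host state, reusing the same status invocation the harness runs in its post-mortem (profile name from this log):

	if out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 | grep -q Running; then
	  out/minikube-darwin-arm64 pause -p embed-certs-574000
	fi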

TestStartStop/group/newest-cni/serial/FirstStart (10.17s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-338000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-338000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.10209275s)

-- stdout --
	* [newest-cni-338000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-338000" primary control-plane node in "newest-cni-338000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-338000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 03:45:17.080921   11208 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:45:17.081048   11208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:17.081051   11208 out.go:358] Setting ErrFile to fd 2...
	I0923 03:45:17.081053   11208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:17.081186   11208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:45:17.082322   11208 out.go:352] Setting JSON to false
	I0923 03:45:17.098329   11208 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6288,"bootTime":1727082029,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:45:17.098397   11208 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:45:17.100631   11208 out.go:177] * [newest-cni-338000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:45:17.108358   11208 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:45:17.108413   11208 notify.go:220] Checking for updates...
	I0923 03:45:17.115284   11208 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:45:17.118332   11208 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:45:17.121281   11208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:45:17.124239   11208 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:45:17.127304   11208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:45:17.129083   11208 config.go:182] Loaded profile config "default-k8s-diff-port-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:45:17.129144   11208 config.go:182] Loaded profile config "multinode-896000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:45:17.129194   11208 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:45:17.133238   11208 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 03:45:17.140144   11208 start.go:297] selected driver: qemu2
	I0923 03:45:17.140149   11208 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:45:17.140161   11208 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:45:17.142372   11208 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0923 03:45:17.142413   11208 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0923 03:45:17.146307   11208 out.go:177] * Automatically selected the socket_vmnet network
	I0923 03:45:17.153190   11208 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0923 03:45:17.153207   11208 cni.go:84] Creating CNI manager for ""
	I0923 03:45:17.153235   11208 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:45:17.153245   11208 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:45:17.153273   11208 start.go:340] cluster config:
	{Name:newest-cni-338000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:45:17.157029   11208 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:45:17.164322   11208 out.go:177] * Starting "newest-cni-338000" primary control-plane node in "newest-cni-338000" cluster
	I0923 03:45:17.168274   11208 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:45:17.168293   11208 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:45:17.168301   11208 cache.go:56] Caching tarball of preloaded images
	I0923 03:45:17.168384   11208 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:45:17.168390   11208 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:45:17.168460   11208 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/newest-cni-338000/config.json ...
	I0923 03:45:17.168472   11208 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/newest-cni-338000/config.json: {Name:mk6a6df8e53ce400cecabfd113fbacaf60dbbcd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:45:17.168711   11208 start.go:360] acquireMachinesLock for newest-cni-338000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:45:17.168749   11208 start.go:364] duration metric: took 31.875µs to acquireMachinesLock for "newest-cni-338000"
	I0923 03:45:17.168764   11208 start.go:93] Provisioning new machine with config: &{Name:newest-cni-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:45:17.168792   11208 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:45:17.177263   11208 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:45:17.195514   11208 start.go:159] libmachine.API.Create for "newest-cni-338000" (driver="qemu2")
	I0923 03:45:17.195538   11208 client.go:168] LocalClient.Create starting
	I0923 03:45:17.195609   11208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:45:17.195644   11208 main.go:141] libmachine: Decoding PEM data...
	I0923 03:45:17.195653   11208 main.go:141] libmachine: Parsing certificate...
	I0923 03:45:17.195690   11208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:45:17.195715   11208 main.go:141] libmachine: Decoding PEM data...
	I0923 03:45:17.195722   11208 main.go:141] libmachine: Parsing certificate...
	I0923 03:45:17.196066   11208 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:45:17.362070   11208 main.go:141] libmachine: Creating SSH key...
	I0923 03:45:17.660795   11208 main.go:141] libmachine: Creating Disk image...
	I0923 03:45:17.660801   11208 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:45:17.661022   11208 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2
	I0923 03:45:17.670437   11208 main.go:141] libmachine: STDOUT: 
	I0923 03:45:17.670466   11208 main.go:141] libmachine: STDERR: 
	I0923 03:45:17.670545   11208 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2 +20000M
	I0923 03:45:17.678488   11208 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:45:17.678521   11208 main.go:141] libmachine: STDERR: 
	I0923 03:45:17.678540   11208 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2
	I0923 03:45:17.678551   11208 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:45:17.678562   11208 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:45:17.678599   11208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:64:93:d0:59:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2
	I0923 03:45:17.680246   11208 main.go:141] libmachine: STDOUT: 
	I0923 03:45:17.680257   11208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:45:17.680278   11208 client.go:171] duration metric: took 484.74575ms to LocalClient.Create
	I0923 03:45:19.682440   11208 start.go:128] duration metric: took 2.513682416s to createHost
	I0923 03:45:19.682551   11208 start.go:83] releasing machines lock for "newest-cni-338000", held for 2.51381125s
	W0923 03:45:19.682613   11208 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:19.703248   11208 out.go:177] * Deleting "newest-cni-338000" in qemu2 ...
	W0923 03:45:19.765856   11208 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:19.765893   11208 start.go:729] Will try again in 5 seconds ...
	I0923 03:45:24.768109   11208 start.go:360] acquireMachinesLock for newest-cni-338000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:45:24.768601   11208 start.go:364] duration metric: took 362.958µs to acquireMachinesLock for "newest-cni-338000"
	I0923 03:45:24.768731   11208 start.go:93] Provisioning new machine with config: &{Name:newest-cni-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 03:45:24.769019   11208 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 03:45:24.772021   11208 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 03:45:24.822735   11208 start.go:159] libmachine.API.Create for "newest-cni-338000" (driver="qemu2")
	I0923 03:45:24.822768   11208 client.go:168] LocalClient.Create starting
	I0923 03:45:24.822907   11208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/ca.pem
	I0923 03:45:24.822981   11208 main.go:141] libmachine: Decoding PEM data...
	I0923 03:45:24.823006   11208 main.go:141] libmachine: Parsing certificate...
	I0923 03:45:24.823089   11208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19689-6600/.minikube/certs/cert.pem
	I0923 03:45:24.823136   11208 main.go:141] libmachine: Decoding PEM data...
	I0923 03:45:24.823173   11208 main.go:141] libmachine: Parsing certificate...
	I0923 03:45:24.823945   11208 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 03:45:24.996891   11208 main.go:141] libmachine: Creating SSH key...
	I0923 03:45:25.090496   11208 main.go:141] libmachine: Creating Disk image...
	I0923 03:45:25.090502   11208 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 03:45:25.090744   11208 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2.raw /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2
	I0923 03:45:25.100187   11208 main.go:141] libmachine: STDOUT: 
	I0923 03:45:25.100204   11208 main.go:141] libmachine: STDERR: 
	I0923 03:45:25.100265   11208 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2 +20000M
	I0923 03:45:25.108214   11208 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 03:45:25.108229   11208 main.go:141] libmachine: STDERR: 
	I0923 03:45:25.108238   11208 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2
	I0923 03:45:25.108251   11208 main.go:141] libmachine: Starting QEMU VM...
	I0923 03:45:25.108260   11208 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:45:25.108296   11208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:7a:e7:60:6c:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2
	I0923 03:45:25.109995   11208 main.go:141] libmachine: STDOUT: 
	I0923 03:45:25.110009   11208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:45:25.110022   11208 client.go:171] duration metric: took 287.25575ms to LocalClient.Create
	I0923 03:45:27.112295   11208 start.go:128] duration metric: took 2.343270083s to createHost
	I0923 03:45:27.112363   11208 start.go:83] releasing machines lock for "newest-cni-338000", held for 2.343783083s
	W0923 03:45:27.112739   11208 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-338000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-338000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:27.121049   11208 out.go:201] 
	W0923 03:45:27.130307   11208 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:45:27.130334   11208 out.go:270] * 
	* 
	W0923 03:45:27.132840   11208 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:45:27.141247   11208 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-338000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-338000 -n newest-cni-338000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-338000 -n newest-cni-338000: exit status 7 (67.471042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-338000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.17s)
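
Note that the create path dies at the very first QEMU launch with empty STDOUT and only the connection-refused STDERR, which is consistent with socket_vmnet_client connecting to the control socket before it execs the QEMU command. Assuming that behavior, the failure should reproduce outside the test harness with the wrapper invocation from the log, trimmed to a minimum (the disk and ISO arguments are not needed to hit the socket error):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -accel hvf -display none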

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-152000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000: exit status 7 (31.970375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-152000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-152000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-152000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.543125ms)

** stderr ** 
	error: context "default-k8s-diff-port-152000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-152000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000: exit status 7 (29.376291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-152000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000: exit status 7 (29.834709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-152000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-152000 --alsologtostderr -v=1: exit status 83 (47.7285ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-152000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-152000"

-- /stdout --
** stderr ** 
	I0923 03:45:20.009189   11230 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:45:20.009339   11230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:20.009342   11230 out.go:358] Setting ErrFile to fd 2...
	I0923 03:45:20.009344   11230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:20.009474   11230 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:45:20.009706   11230 out.go:352] Setting JSON to false
	I0923 03:45:20.009717   11230 mustload.go:65] Loading cluster: default-k8s-diff-port-152000
	I0923 03:45:20.009935   11230 config.go:182] Loaded profile config "default-k8s-diff-port-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:45:20.014631   11230 out.go:177] * The control-plane node default-k8s-diff-port-152000 host is not running: state=Stopped
	I0923 03:45:20.024819   11230 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-152000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-152000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000: exit status 7 (30.654458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-152000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000: exit status 7 (28.969625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-338000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-338000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.1873585s)

-- stdout --
	* [newest-cni-338000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-338000" primary control-plane node in "newest-cni-338000" cluster
	* Restarting existing qemu2 VM for "newest-cni-338000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-338000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I0923 03:45:31.356758   11288 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:45:31.356873   11288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:31.356876   11288 out.go:358] Setting ErrFile to fd 2...
	I0923 03:45:31.356879   11288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:31.356992   11288 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:45:31.358032   11288 out.go:352] Setting JSON to false
	I0923 03:45:31.374319   11288 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6302,"bootTime":1727082029,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:45:31.374387   11288 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:45:31.379021   11288 out.go:177] * [newest-cni-338000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:45:31.385958   11288 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:45:31.386004   11288 notify.go:220] Checking for updates...
	I0923 03:45:31.392002   11288 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:45:31.393507   11288 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:45:31.397025   11288 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:45:31.399994   11288 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:45:31.403010   11288 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:45:31.406302   11288 config.go:182] Loaded profile config "newest-cni-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:45:31.406565   11288 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:45:31.410985   11288 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:45:31.418001   11288 start.go:297] selected driver: qemu2
	I0923 03:45:31.418006   11288 start.go:901] validating driver "qemu2" against &{Name:newest-cni-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:45:31.418064   11288 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:45:31.420435   11288 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0923 03:45:31.420455   11288 cni.go:84] Creating CNI manager for ""
	I0923 03:45:31.420479   11288 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:45:31.420507   11288 start.go:340] cluster config:
	{Name:newest-cni-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:45:31.424112   11288 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:45:31.433008   11288 out.go:177] * Starting "newest-cni-338000" primary control-plane node in "newest-cni-338000" cluster
	I0923 03:45:31.437014   11288 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:45:31.437031   11288 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:45:31.437037   11288 cache.go:56] Caching tarball of preloaded images
	I0923 03:45:31.437114   11288 preload.go:172] Found /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 03:45:31.437120   11288 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 03:45:31.437185   11288 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/newest-cni-338000/config.json ...
	I0923 03:45:31.437669   11288 start.go:360] acquireMachinesLock for newest-cni-338000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:45:31.437697   11288 start.go:364] duration metric: took 21.917µs to acquireMachinesLock for "newest-cni-338000"
	I0923 03:45:31.437707   11288 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:45:31.437712   11288 fix.go:54] fixHost starting: 
	I0923 03:45:31.437832   11288 fix.go:112] recreateIfNeeded on newest-cni-338000: state=Stopped err=<nil>
	W0923 03:45:31.437840   11288 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:45:31.441972   11288 out.go:177] * Restarting existing qemu2 VM for "newest-cni-338000" ...
	I0923 03:45:31.449986   11288 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:45:31.450019   11288 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:7a:e7:60:6c:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2
	I0923 03:45:31.452187   11288 main.go:141] libmachine: STDOUT: 
	I0923 03:45:31.452208   11288 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:45:31.452246   11288 fix.go:56] duration metric: took 14.532584ms for fixHost
	I0923 03:45:31.452251   11288 start.go:83] releasing machines lock for "newest-cni-338000", held for 14.5505ms
	W0923 03:45:31.452259   11288 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:45:31.452291   11288 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:31.452296   11288 start.go:729] Will try again in 5 seconds ...
	I0923 03:45:36.454442   11288 start.go:360] acquireMachinesLock for newest-cni-338000: {Name:mkcef9bc5c5dc8a2f6e7b4d091e2192881eeca10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 03:45:36.454859   11288 start.go:364] duration metric: took 311.625µs to acquireMachinesLock for "newest-cni-338000"
	I0923 03:45:36.454997   11288 start.go:96] Skipping create...Using existing machine configuration
	I0923 03:45:36.455023   11288 fix.go:54] fixHost starting: 
	I0923 03:45:36.455796   11288 fix.go:112] recreateIfNeeded on newest-cni-338000: state=Stopped err=<nil>
	W0923 03:45:36.455825   11288 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 03:45:36.465344   11288 out.go:177] * Restarting existing qemu2 VM for "newest-cni-338000" ...
	I0923 03:45:36.469368   11288 qemu.go:418] Using hvf for hardware acceleration
	I0923 03:45:36.469603   11288 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:7a:e7:60:6c:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19689-6600/.minikube/machines/newest-cni-338000/disk.qcow2
	I0923 03:45:36.479437   11288 main.go:141] libmachine: STDOUT: 
	I0923 03:45:36.479538   11288 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 03:45:36.479692   11288 fix.go:56] duration metric: took 24.670542ms for fixHost
	I0923 03:45:36.479716   11288 start.go:83] releasing machines lock for "newest-cni-338000", held for 24.832458ms
	W0923 03:45:36.479962   11288 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-338000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-338000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 03:45:36.488386   11288 out.go:201] 
	W0923 03:45:36.492408   11288 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 03:45:36.492432   11288 out.go:270] * 
	* 
	W0923 03:45:36.495058   11288 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:45:36.502343   11288 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-338000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-338000 -n newest-cni-338000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-338000 -n newest-cni-338000: exit status 7 (68.66ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-338000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-338000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-338000 -n newest-cni-338000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-338000 -n newest-cni-338000: exit status 7 (31.135666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-338000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-338000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-338000 --alsologtostderr -v=1: exit status 83 (43.58625ms)

-- stdout --
	* The control-plane node newest-cni-338000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-338000"

-- /stdout --
** stderr ** 
	I0923 03:45:36.688374   11306 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:45:36.688536   11306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:36.688539   11306 out.go:358] Setting ErrFile to fd 2...
	I0923 03:45:36.688542   11306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:45:36.688677   11306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:45:36.688933   11306 out.go:352] Setting JSON to false
	I0923 03:45:36.688943   11306 mustload.go:65] Loading cluster: newest-cni-338000
	I0923 03:45:36.689183   11306 config.go:182] Loaded profile config "newest-cni-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:45:36.693676   11306 out.go:177] * The control-plane node newest-cni-338000 host is not running: state=Stopped
	I0923 03:45:36.697616   11306 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-338000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-338000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-338000 -n newest-cni-338000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-338000 -n newest-cni-338000: exit status 7 (30.759333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-338000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-338000 -n newest-cni-338000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-338000 -n newest-cni-338000: exit status 7 (30.550375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-338000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

Test pass (79/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 7.27
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.69
39 TestErrorSpam/start 0.39
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 9.76
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.76
55 TestFunctional/serial/CacheCmd/cache/add_local 1.08
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.1
93 TestFunctional/parallel/License 0.23
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.83
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
126 TestFunctional/parallel/ProfileCmd/profile_list 0.08
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.14
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.2
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 0.93
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.42
258 TestNoKubernetes/serial/Stop 1.9
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.71
275 TestStartStop/group/old-k8s-version/serial/Stop 2.05
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
286 TestStartStop/group/no-preload/serial/Stop 2.68
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
299 TestStartStop/group/embed-certs/serial/Stop 3.04
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.89
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 3.92
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 03:19:20.439761    7121 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0923 03:19:20.440168    7121 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-437000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-437000: exit status 85 (94.769292ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-437000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |          |
	|         | -p download-only-437000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 03:19:05
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 03:19:05.270721    7122 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:19:05.270881    7122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:19:05.270885    7122 out.go:358] Setting ErrFile to fd 2...
	I0923 03:19:05.270887    7122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:19:05.271033    7122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	W0923 03:19:05.271122    7122 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19689-6600/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19689-6600/.minikube/config/config.json: no such file or directory
	I0923 03:19:05.272516    7122 out.go:352] Setting JSON to true
	I0923 03:19:05.289805    7122 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4716,"bootTime":1727082029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:19:05.289872    7122 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:19:05.296262    7122 out.go:97] [download-only-437000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:19:05.296381    7122 notify.go:220] Checking for updates...
	W0923 03:19:05.296448    7122 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 03:19:05.299992    7122 out.go:169] MINIKUBE_LOCATION=19689
	I0923 03:19:05.303456    7122 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:19:05.309290    7122 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:19:05.313309    7122 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:19:05.317189    7122 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	W0923 03:19:05.329274    7122 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 03:19:05.329498    7122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:19:05.332370    7122 out.go:97] Using the qemu2 driver based on user configuration
	I0923 03:19:05.332388    7122 start.go:297] selected driver: qemu2
	I0923 03:19:05.332397    7122 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:19:05.332474    7122 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:19:05.335274    7122 out.go:169] Automatically selected the socket_vmnet network
	I0923 03:19:05.340848    7122 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0923 03:19:05.340961    7122 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 03:19:05.341022    7122 cni.go:84] Creating CNI manager for ""
	I0923 03:19:05.341068    7122 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 03:19:05.341134    7122 start.go:340] cluster config:
	{Name:download-only-437000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:19:05.345122    7122 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:19:05.349296    7122 out.go:97] Downloading VM boot image ...
	I0923 03:19:05.349316    7122 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso
	I0923 03:19:11.806673    7122 out.go:97] Starting "download-only-437000" primary control-plane node in "download-only-437000" cluster
	I0923 03:19:11.806696    7122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 03:19:11.864375    7122 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 03:19:11.864389    7122 cache.go:56] Caching tarball of preloaded images
	I0923 03:19:11.864561    7122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 03:19:11.868716    7122 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 03:19:11.868722    7122 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 03:19:11.943638    7122 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 03:19:19.129330    7122 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 03:19:19.129505    7122 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 03:19:19.825648    7122 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 03:19:19.825844    7122 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/download-only-437000/config.json ...
	I0923 03:19:19.825861    7122 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19689-6600/.minikube/profiles/download-only-437000/config.json: {Name:mk2eb9b3f2689a5995386bf57780e3d7152cac5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 03:19:19.826085    7122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 03:19:19.827115    7122 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0923 03:19:20.386483    7122 out.go:193] 
	W0923 03:19:20.393353    7122 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19689-6600/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104d396c0 0x104d396c0 0x104d396c0 0x104d396c0 0x104d396c0 0x104d396c0 0x104d396c0] Decompressors:map[bz2:0x14000a057f0 gz:0x14000a057f8 tar:0x14000a057a0 tar.bz2:0x14000a057b0 tar.gz:0x14000a057c0 tar.xz:0x14000a057d0 tar.zst:0x14000a057e0 tbz2:0x14000a057b0 tgz:0x14000a057c0 txz:0x14000a057d0 tzst:0x14000a057e0 xz:0x14000a05800 zip:0x14000a05810 zst:0x14000a05808] Getters:map[file:0x140000647a0 http:0x140008b6140 https:0x140008b6190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0923 03:19:20.393382    7122 out_reason.go:110] 
	W0923 03:19:20.403427    7122 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 03:19:20.406345    7122 out.go:193] 
	
	
	* The control-plane node download-only-437000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-437000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-437000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (7.27s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-406000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-406000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (7.272862708s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.27s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 03:19:28.069390    7121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 03:19:28.069453    7121 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-406000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-406000: exit status 85 (79.640916ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-437000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | -p download-only-437000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	| delete  | -p download-only-437000        | download-only-437000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT | 23 Sep 24 03:19 PDT |
	| start   | -o=json --download-only        | download-only-406000 | jenkins | v1.34.0 | 23 Sep 24 03:19 PDT |                     |
	|         | -p download-only-406000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 03:19:20
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 03:19:20.824319    7146 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:19:20.824425    7146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:19:20.824429    7146 out.go:358] Setting ErrFile to fd 2...
	I0923 03:19:20.824431    7146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:19:20.824557    7146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:19:20.825619    7146 out.go:352] Setting JSON to true
	I0923 03:19:20.841832    7146 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4731,"bootTime":1727082029,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:19:20.841909    7146 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:19:20.846555    7146 out.go:97] [download-only-406000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:19:20.846654    7146 notify.go:220] Checking for updates...
	I0923 03:19:20.850491    7146 out.go:169] MINIKUBE_LOCATION=19689
	I0923 03:19:20.853537    7146 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:19:20.857510    7146 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:19:20.860535    7146 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:19:20.863543    7146 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	W0923 03:19:20.869521    7146 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 03:19:20.869719    7146 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:19:20.872443    7146 out.go:97] Using the qemu2 driver based on user configuration
	I0923 03:19:20.872452    7146 start.go:297] selected driver: qemu2
	I0923 03:19:20.872455    7146 start.go:901] validating driver "qemu2" against <nil>
	I0923 03:19:20.872497    7146 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 03:19:20.875512    7146 out.go:169] Automatically selected the socket_vmnet network
	I0923 03:19:20.880756    7146 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0923 03:19:20.880899    7146 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 03:19:20.880920    7146 cni.go:84] Creating CNI manager for ""
	I0923 03:19:20.880944    7146 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 03:19:20.880949    7146 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 03:19:20.881005    7146 start.go:340] cluster config:
	{Name:download-only-406000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:19:20.884524    7146 iso.go:125] acquiring lock: {Name:mka63ef2224982b31adee7c75ddda252a8d82721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 03:19:20.886012    7146 out.go:97] Starting "download-only-406000" primary control-plane node in "download-only-406000" cluster
	I0923 03:19:20.886021    7146 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:19:20.939740    7146 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 03:19:20.939753    7146 cache.go:56] Caching tarball of preloaded images
	I0923 03:19:20.939987    7146 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 03:19:20.943385    7146 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 03:19:20.943397    7146 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0923 03:19:21.017448    7146 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19689-6600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-406000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-406000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-406000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-858000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-858000: exit status 85 (59.327583ms)

-- stdout --
	* Profile "addons-858000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-858000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-858000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-858000: exit status 85 (63.238459ms)

-- stdout --
	* Profile "addons-858000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-858000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.69s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I0923 03:30:53.701409    7121 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 03:30:53.701570    7121 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W0923 03:30:55.607301    7121 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0923 03:30:55.607522    7121 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0923 03:30:55.607580    7121 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/001/docker-machine-driver-hyperkit
I0923 03:30:56.110804    7121 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x108ee6d40 0x108ee6d40 0x108ee6d40 0x108ee6d40 0x108ee6d40 0x108ee6d40 0x108ee6d40] Decompressors:map[bz2:0x1400012b5d0 gz:0x1400012b5d8 tar:0x1400012b580 tar.bz2:0x1400012b590 tar.gz:0x1400012b5a0 tar.xz:0x1400012b5b0 tar.zst:0x1400012b5c0 tbz2:0x1400012b590 tgz:0x1400012b5a0 txz:0x1400012b5b0 tzst:0x1400012b5c0 xz:0x1400012b5e0 zip:0x1400012b5f0 zst:0x1400012b5e8] Getters:map[file:0x140012e2470 http:0x1400082b4f0 https:0x1400082b540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 03:30:56.110929    7121 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1570438969/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.69s)
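
The log above shows the fallback this test exercises: the arch-specific download fails because its .sha256 checksum file 404s, and the installer retries the common (unsuffixed) artifact. A minimal Go sketch of that decision, using the release URLs from the log; the exists helper and the overall wiring are illustrative assumptions, not minikube's actual download code:

    package main

    import (
        "fmt"
        "net/http"
    )

    // Release URL prefix, taken verbatim from the download lines above.
    const base = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"

    // exists reports whether url answers 200 OK to a HEAD request.
    func exists(url string) bool {
        resp, err := http.Head(url)
        if err != nil {
            return false
        }
        resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    // pickDriverURL prefers the arch-suffixed artifact and falls back to the
    // common one when the arch-specific checksum file is missing, mirroring
    // the 404-then-retry sequence in the log.
    func pickDriverURL(arch string) string {
        archURL := base + "-" + arch
        if exists(archURL + ".sha256") {
            return archURL
        }
        return base
    }

    func main() {
        fmt.Println(pickDriverURL("arm64"))
    }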

TestErrorSpam/start (0.39s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.09s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 status: exit status 7 (31.9495ms)

-- stdout --
	nospam-177000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 status: exit status 7 (30.124917ms)

-- stdout --
	nospam-177000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 status: exit status 7 (30.387791ms)

-- stdout --
	nospam-177000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
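
All three status runs above return exit status 7 with host, kubelet, apiserver, and kubeconfig reported as Stopped, so the test treats the non-zero exit as expected output rather than a failure. A toy Go sketch of mapping a host state to a status exit code; only the Stopped-to-7 pairing is read off this log, and everything else is an illustrative placeholder, not minikube's actual exit-code scheme:

    package main

    import (
        "fmt"
        "os"
    )

    // exitCodeForHost maps a reported host state to a status exit code.
    // Stopped->7 matches the runs above; the other values are placeholders.
    func exitCodeForHost(state string) int {
        switch state {
        case "Running":
            return 0
        case "Stopped":
            return 7
        default:
            return 1
        }
    }

    func main() {
        state := "Stopped" // the state reported in all three runs above
        fmt.Printf("nospam-177000\ntype: Control Plane\nhost: %s\n", state)
        os.Exit(exitCodeForHost(state))
    }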

TestErrorSpam/pause (0.12s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 pause: exit status 83 (38.584292ms)

-- stdout --
	* The control-plane node nospam-177000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-177000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 pause: exit status 83 (39.968583ms)

-- stdout --
	* The control-plane node nospam-177000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-177000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 pause: exit status 83 (40.832209ms)

-- stdout --
	* The control-plane node nospam-177000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-177000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 unpause: exit status 83 (40.753ms)

-- stdout --
	* The control-plane node nospam-177000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-177000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 unpause: exit status 83 (38.278792ms)

-- stdout --
	* The control-plane node nospam-177000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-177000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 unpause: exit status 83 (39.327333ms)

-- stdout --
	* The control-plane node nospam-177000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-177000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (9.76s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 stop: (3.206725375s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 stop: (3.202847167s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-177000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-177000 stop: (3.351117292s)
--- PASS: TestErrorSpam/stop (9.76s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19689-6600/.minikube/files/etc/test/nested/copy/7121/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.76s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.76s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-824000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local992262224/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 cache add minikube-local-cache-test:functional-824000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 cache delete minikube-local-cache-test:functional-824000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-824000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 config get cpus: exit status 14 (31.618625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 config get cpus: exit status 14 (32.516667ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

TestFunctional/parallel/DryRun (0.28s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-824000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-824000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (163.302ms)

-- stdout --
	* [functional-824000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 03:21:13.171471    7719 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:21:13.171679    7719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:21:13.171684    7719 out.go:358] Setting ErrFile to fd 2...
	I0923 03:21:13.171687    7719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:21:13.171878    7719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:21:13.173304    7719 out.go:352] Setting JSON to false
	I0923 03:21:13.193200    7719 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4844,"bootTime":1727082029,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:21:13.193270    7719 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:21:13.197230    7719 out.go:177] * [functional-824000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 03:21:13.205268    7719 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:21:13.205334    7719 notify.go:220] Checking for updates...
	I0923 03:21:13.213231    7719 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:21:13.217070    7719 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:21:13.220331    7719 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:21:13.223197    7719 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:21:13.226262    7719 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:21:13.229519    7719 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:21:13.229811    7719 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:21:13.234237    7719 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 03:21:13.241124    7719 start.go:297] selected driver: qemu2
	I0923 03:21:13.241128    7719 start.go:901] validating driver "qemu2" against &{Name:functional-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:21:13.241170    7719 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:21:13.248240    7719 out.go:201] 
	W0923 03:21:13.252203    7719 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 03:21:13.256200    7719 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-824000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
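
The dry run is rejected before any VM work begins: 250MiB falls below the 1800MB usable minimum, producing RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23. A small sketch of that kind of pre-flight memory check, assuming only what the log shows (the floor value, the error name, and the exit status); the function itself is illustrative:

    package main

    import (
        "fmt"
        "os"
    )

    // Usable minimum reported by the dry-run output above.
    const minUsableMB = 1800

    // validateMemory rejects allocations below the floor, echoing the
    // RSRC_INSUFFICIENT_REQ_MEMORY message from the log.
    func validateMemory(requestedMB int) error {
        if requestedMB < minUsableMB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
        }
        return nil
    }

    func main() {
        if err := validateMemory(250); err != nil {
            fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
            os.Exit(23) // exit status observed in the test above
        }
    }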

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-824000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-824000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.748208ms)

-- stdout --
	* [functional-824000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 03:21:13.405003    7730 out.go:345] Setting OutFile to fd 1 ...
	I0923 03:21:13.405126    7730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:21:13.405130    7730 out.go:358] Setting ErrFile to fd 2...
	I0923 03:21:13.405132    7730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 03:21:13.405276    7730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19689-6600/.minikube/bin
	I0923 03:21:13.406699    7730 out.go:352] Setting JSON to false
	I0923 03:21:13.423665    7730 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4844,"bootTime":1727082029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 03:21:13.423759    7730 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 03:21:13.428288    7730 out.go:177] * [functional-824000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0923 03:21:13.435236    7730 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 03:21:13.435309    7730 notify.go:220] Checking for updates...
	I0923 03:21:13.442196    7730 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	I0923 03:21:13.445192    7730 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 03:21:13.448247    7730 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 03:21:13.451225    7730 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	I0923 03:21:13.454192    7730 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 03:21:13.457544    7730 config.go:182] Loaded profile config "functional-824000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 03:21:13.457800    7730 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 03:21:13.462134    7730 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0923 03:21:13.469227    7730 start.go:297] selected driver: qemu2
	I0923 03:21:13.469233    7730 start.go:901] validating driver "qemu2" against &{Name:functional-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 03:21:13.469296    7730 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 03:21:13.475215    7730 out.go:201] 
	W0923 03:21:13.479264    7730 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 03:21:13.482178    7730 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
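
This is the same dry run repeated under a French locale: the output is translated ("Utilisation du pilote qemu2 basé sur le profil existant" is "Using the qemu2 driver based on existing profile") while the RSRC_INSUFFICIENT_REQ_MEMORY exit is unchanged. A hedged sketch of the locale-keyed message lookup such a test exercises; the catalog, helper, and environment handling are illustrative assumptions, not minikube's implementation:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // A one-entry French catalog for illustration; real catalogs are
    // much larger generated files.
    var frFR = map[string]string{
        "* Using the qemu2 driver based on existing profile": "* Utilisation du pilote qemu2 basé sur le profil existant",
    }

    // translate returns the French string when the process locale starts
    // with "fr", and the original message otherwise.
    func translate(msg string) string {
        locale := os.Getenv("LC_ALL")
        if locale == "" {
            locale = os.Getenv("LANG")
        }
        if strings.HasPrefix(locale, "fr") {
            if t, ok := frFR[msg]; ok {
                return t
            }
        }
        return msg
    }

    func main() {
        fmt.Println(translate("* Using the qemu2 driver based on existing profile"))
    }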

TestFunctional/parallel/AddonsCmd (0.1s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/License (0.23s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.83s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.795787041s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-824000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.83s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-824000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image rm kicbase/echo-server:functional-824000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-824000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 image save --daemon kicbase/echo-server:functional-824000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-824000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "45.80225ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.889583ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "46.080084ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "34.712208ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013993083s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-824000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-824000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-824000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-824000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.14s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-370000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-370000 --output=json --user=testUser: (3.141675083s)
--- PASS: TestJSONOutput/stop/Command (3.14s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-887000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-887000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.852583ms)

-- stdout --
	{"specversion":"1.0","id":"a70ba477-6a72-4fa9-80f7-b0e184c0638d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-887000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c82b9e8-215e-4fe4-8ab5-6ac279b6c112","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"9f078e66-f790-4027-88df-09ef180f8b0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig"}}
	{"specversion":"1.0","id":"b6bc7265-dd98-461a-a0ac-29ea02d73eea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"8de42f6b-bb3d-43fe-a31c-42ca361da7c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b2aee05d-b844-407c-b750-c6f077ed9279","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube"}}
	{"specversion":"1.0","id":"5f79bb65-0746-4ad1-94a6-862a68cc8b0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1b65c8e7-fd8f-4206-b22e-0afca7486a4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-887000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-887000
--- PASS: TestErrorJSONOutput (0.20s)
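
Each stdout line above is a self-contained CloudEvents-style JSON object, and the final io.k8s.sigs.minikube.error event carries the machine-readable name (DRV_UNSUPPORTED_OS) and exit code ("56"). A minimal Go sketch of consuming such a stream line by line; the struct covers only the fields visible in this log, not the full schema:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // minikubeEvent mirrors only the fields visible in the JSON lines above.
    type minikubeEvent struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev minikubeEvent
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip anything that is not a JSON event line
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s (exit code %s): %s\n",
                    ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }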

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.93s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.93s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-346000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-346000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (95.783083ms)

-- stdout --
	* [NoKubernetes-346000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19689-6600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19689-6600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
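
The MK_USAGE failure above comes from a mutual-exclusion check: --no-kubernetes cannot be combined with --kubernetes-version. A short sketch of that kind of flag validation, reusing the flag names, message, and exit status from the log; the wiring is illustrative, not minikube's actual CLI code:

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noK8s := flag.Bool("no-kubernetes", false, "start the node without Kubernetes")
        k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
        flag.Parse()

        // The combination rejected in the test above.
        if *noK8s && *k8sVersion != "" {
            fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14) // exit status observed in the test
        }
        fmt.Println("flags OK")
    }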

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-346000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-346000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.990458ms)

-- stdout --
	* The control-plane node NoKubernetes-346000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-346000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.42s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.671202625s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.749864458s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.42s)
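
Both invocations enumerate the same profiles; the second is the machine-readable variant. Each call took roughly 15.7s in this run. For reference, a sketch of the two forms exercised here (output shapes assumed from the flags, not taken from this log):

$ minikube profile list
  # human-readable table of profiles and their status
$ minikube profile list --output=json
  # the same data as JSON, suitable for scripting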

TestNoKubernetes/serial/Stop (1.9s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-346000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-346000: (1.902267542s)
--- PASS: TestNoKubernetes/serial/Stop (1.90s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-346000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-346000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.347209ms)

-- stdout --
	* The control-plane node NoKubernetes-346000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-346000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-516000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

TestStartStop/group/old-k8s-version/serial/Stop (2.05s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-937000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-937000 --alsologtostderr -v=3: (2.049686833s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.05s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-937000 -n old-k8s-version-937000: exit status 7 (56.244083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-937000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
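
The EnableAddonAfterStop flow reduces to a status probe plus an addon toggle. The --format flag takes a Go template, so {{.Host}} prints only the host state; the resulting exit status 7 corresponds to the stopped host and is tolerated by the test ("may be ok"). A sketch with an illustrative profile name:

$ minikube status --format={{.Host}} -p demo
  # prints "Stopped" and exits 7 when the host is down
$ minikube addons enable dashboard -p demo --images=MetricsScraper=registry.k8s.io/echoserver:1.4
  # must succeed even while the cluster is stopped, which is the property under test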

TestStartStop/group/no-preload/serial/Stop (2.68s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-984000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-984000 --alsologtostderr -v=3: (2.681277375s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.68s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-984000 -n no-preload-984000: exit status 7 (51.867333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-984000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.04s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-574000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-574000 --alsologtostderr -v=3: (3.035850792s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.04s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.89s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-152000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-152000 --alsologtostderr -v=3: (1.886410333s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.89s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-574000 -n embed-certs-574000: exit status 7 (58.871291ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-574000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-152000 -n default-k8s-diff-port-152000: exit status 7 (55.795542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-152000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-338000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.92s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-338000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-338000 --alsologtostderr -v=3: (3.918723708s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.92s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-338000 -n newest-cni-338000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-338000 -n newest-cni-338000: exit status 7 (56.053208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-338000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.99s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-824000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2474995779/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727086831971977000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2474995779/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727086831971977000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2474995779/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727086831971977000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2474995779/001/test-1727086831971977000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (57.733125ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:32.031019    7121 retry.go:31] will retry after 275.133495ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
I0923 03:20:32.327942    7121 retry.go:31] will retry after 6.487074755s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.612709ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:32.394039    7121 retry.go:31] will retry after 854.284028ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (94.501958ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:33.345284    7121 retry.go:31] will retry after 1.424400486s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.092125ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:34.858147    7121 retry.go:31] will retry after 2.501592037s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.881ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:37.450136    7121 retry.go:31] will retry after 3.73293301s: exit status 83
I0923 03:20:38.817226    7121 retry.go:31] will retry after 7.032158629s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.848583ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:41.269268    7121 retry.go:31] will retry after 3.439881764s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.75325ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "sudo umount -f /mount-9p": exit status 83 (47.885542ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-824000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-824000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2474995779/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.99s)
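
The SKIP above is the harness polling for the 9p mount with increasing backoff (the retry.go lines) and eventually giving up, attributing the failure to macOS blocking the non-code-signed binary from listening. A manual equivalent of that polling loop, with illustrative profile and path names and assuming a running cluster:

$ minikube mount -p demo /tmp/mnt:/mount-9p --alsologtostderr -v=1 &
  # run the 9p mount server in the background, as the test's daemon step does
$ until minikube ssh -p demo "findmnt -T /mount-9p | grep 9p"; do sleep 2; done
  # poll until the mount is visible inside the guest; the harness retries this same check with growing backoff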

TestFunctional/parallel/MountCmd/specific-port (14.86s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-824000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1978455607/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (60.832458ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:45.027424    7121 retry.go:31] will retry after 273.334253ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.715042ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:45.389832    7121 retry.go:31] will retry after 379.04515ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
I0923 03:20:45.849806    7121 retry.go:31] will retry after 13.068344005s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.566958ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:45.860755    7121 retry.go:31] will retry after 1.05277892s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.178ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:47.001176    7121 retry.go:31] will retry after 1.744508445s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.309625ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:48.836388    7121 retry.go:31] will retry after 1.625348499s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.221791ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:50.549380    7121 retry.go:31] will retry after 4.313302656s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.546084ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:54.949498    7121 retry.go:31] will retry after 4.621956564s: exit status 83
I0923 03:20:58.920404    7121 retry.go:31] will retry after 11.227721745s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.287291ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "sudo umount -f /mount-9p": exit status 83 (47.461ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-824000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-824000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1978455607/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (14.86s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.28s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-824000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1588174151/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-824000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1588174151/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-824000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1588174151/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1: exit status 83 (77.951542ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:20:59.907322    7121 retry.go:31] will retry after 708.80485ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1: exit status 83 (85.021292ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:21:00.703619    7121 retry.go:31] will retry after 620.990635ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1: exit status 83 (87.296541ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:21:01.414402    7121 retry.go:31] will retry after 1.655308896s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1: exit status 83 (82.750417ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:21:03.154830    7121 retry.go:31] will retry after 1.056492285s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1: exit status 83 (88.257959ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:21:04.301983    7121 retry.go:31] will retry after 2.616784549s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1: exit status 83 (86.216ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
I0923 03:21:07.007262    7121 retry.go:31] will retry after 5.624165967s: exit status 83
I0923 03:21:10.150272    7121 retry.go:31] will retry after 28.333856928s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-824000 ssh "findmnt -T" /mount1: exit status 83 (88.497209ms)

-- stdout --
	* The control-plane node functional-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-824000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-824000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1588174151/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-824000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1588174151/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-824000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1588174151/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.28s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.31s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-165000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-165000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-165000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-165000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-165000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-165000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-165000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-165000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-165000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-165000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-165000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: /etc/hosts:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: /etc/resolv.conf:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-165000

>>> host: crictl pods:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: crictl containers:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> k8s: describe netcat deployment:
error: context "cilium-165000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-165000" does not exist

>>> k8s: netcat logs:
error: context "cilium-165000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-165000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-165000" does not exist

>>> k8s: coredns logs:
error: context "cilium-165000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-165000" does not exist

>>> k8s: api server logs:
error: context "cilium-165000" does not exist

>>> host: /etc/cni:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: ip a s:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: ip r s:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: iptables-save:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: iptables table nat:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-165000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-165000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-165000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-165000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-165000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-165000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-165000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-165000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-165000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-165000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-165000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: kubelet daemon config:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> k8s: kubelet logs:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-165000

>>> host: docker daemon status:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: docker daemon config:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: docker system info:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: cri-docker daemon status:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: cri-docker daemon config:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: cri-dockerd version:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: containerd daemon status:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: containerd daemon config:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: containerd config dump:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: crio daemon status:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: crio daemon config:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: /etc/crio:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

>>> host: crio config:
* Profile "cilium-165000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165000"

----------------------- debugLogs end: cilium-165000 [took: 2.2049365s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-165000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-165000
--- SKIP: TestNetworkPlugins/group/cilium (2.31s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-088000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-088000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
