Test Report: QEMU_macOS 19678

8ef5536409705b0cbf1ed8a719bbf7f792426b16:2024-09-20:36299

Failed tests (157/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.62
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.27
22 TestOffline 10.15
27 TestAddons/Setup 10.27
28 TestCertOptions 10.21
29 TestCertExpiration 195.44
30 TestDockerFlags 10.23
31 TestForceSystemdFlag 10.48
32 TestForceSystemdEnv 10.1
38 TestErrorSpam/setup 9.96
47 TestFunctional/serial/StartWithProxy 9.93
49 TestFunctional/serial/SoftStart 5.26
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 2.19
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.04
63 TestFunctional/serial/ExtraConfig 5.26
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.29
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.28
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
102 TestFunctional/parallel/DockerEnv/bash 0.04
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.04
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
110 TestFunctional/parallel/ServiceCmd/Format 0.04
111 TestFunctional/parallel/ServiceCmd/URL 0.05
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 84.78
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.31
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 25.38
141 TestMultiControlPlane/serial/StartCluster 9.87
142 TestMultiControlPlane/serial/DeployApp 87.03
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
150 TestMultiControlPlane/serial/RestartSecondaryNode 42
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.92
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
155 TestMultiControlPlane/serial/StopCluster 3.12
156 TestMultiControlPlane/serial/RestartCluster 5.25
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
162 TestImageBuild/serial/Setup 10.07
165 TestJSONOutput/start/Command 9.98
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.34
197 TestMountStart/serial/StartWithMountFirst 10.01
200 TestMultiNode/serial/FreshStart2Nodes 9.89
201 TestMultiNode/serial/DeployApp2Nodes 112.76
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.08
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 51.06
209 TestMultiNode/serial/RestartKeepsNodes 9.15
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.5
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 20.19
217 TestPreload 9.99
219 TestScheduledStopUnix 10.1
220 TestSkaffold 12.3
223 TestRunningBinaryUpgrade 605.36
225 TestKubernetesUpgrade 18.64
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.06
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1
241 TestStoppedBinaryUpgrade/Upgrade 573.97
243 TestPause/serial/Start 9.91
253 TestNoKubernetes/serial/StartWithK8s 9.98
254 TestNoKubernetes/serial/StartWithStopK8s 5.28
255 TestNoKubernetes/serial/Start 5.31
259 TestNoKubernetes/serial/StartNoArgs 5.36
261 TestNetworkPlugins/group/auto/Start 10.12
262 TestNetworkPlugins/group/kindnet/Start 10.05
263 TestNetworkPlugins/group/calico/Start 9.98
264 TestNetworkPlugins/group/custom-flannel/Start 9.92
265 TestNetworkPlugins/group/false/Start 9.78
266 TestNetworkPlugins/group/enable-default-cni/Start 9.9
267 TestNetworkPlugins/group/flannel/Start 9.92
268 TestNetworkPlugins/group/bridge/Start 9.84
269 TestNetworkPlugins/group/kubenet/Start 10.05
272 TestStartStop/group/old-k8s-version/serial/FirstStart 10
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.22
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.1
283 TestStartStop/group/no-preload/serial/FirstStart 9.85
285 TestStartStop/group/embed-certs/serial/FirstStart 9.93
286 TestStartStop/group/no-preload/serial/DeployApp 0.09
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
290 TestStartStop/group/embed-certs/serial/DeployApp 0.09
291 TestStartStop/group/no-preload/serial/SecondStart 5.29
292 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.16
295 TestStartStop/group/embed-certs/serial/SecondStart 5.27
296 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
299 TestStartStop/group/no-preload/serial/Pause 0.1
301 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.19
302 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/embed-certs/serial/Pause 0.1
307 TestStartStop/group/newest-cni/serial/FirstStart 9.83
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
317 TestStartStop/group/newest-cni/serial/SecondStart 5.26
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (11.62s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-195000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-195000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.614773333s)

-- stdout --
	{"specversion":"1.0","id":"2a83c7aa-653c-488c-97ae-3d0f6d6c8ae8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-195000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ff15aff-7ffb-4fbf-a4b8-59c9b993a725","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"384328ba-4fda-4004-8838-fa3db260ac0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig"}}
	{"specversion":"1.0","id":"05c331a1-1d08-483d-b5ce-6e8fb609a4ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"fa43c909-4f9c-46f2-bf4d-2c8851530b92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a2ccc8cc-39dd-41b7-a961-f56b0dd80ab8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube"}}
	{"specversion":"1.0","id":"421a97e2-2bba-4eb7-99ae-99c793fe4191","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"7aab263a-3dcf-42a0-9a90-1efb5465cbcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"62fc4963-6e72-4917-be9b-9df158ef4438","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"0021dc1d-ba44-440e-9361-c556d40f562e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b47d76b9-30c1-4bf1-a0ac-065d6694cc39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-195000\" primary control-plane node in \"download-only-195000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b7430096-b5bc-4272-a398-79668793c2f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"394d47b5-b938-4316-a423-8ccf96e56b0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1068356c0 0x1068356c0 0x1068356c0 0x1068356c0 0x1068356c0 0x1068356c0 0x1068356c0] Decompressors:map[bz2:0x14000539d00 gz:0x14000539d08 tar:0x14000539cb0 tar.bz2:0x14000539cc0 tar.gz:0x14000539cd0 tar.xz:0x14000539ce0 tar.zst:0x14000539cf0 tbz2:0x14000539cc0 tgz:0x14
000539cd0 txz:0x14000539ce0 tzst:0x14000539cf0 xz:0x14000539d10 zip:0x14000539d20 zst:0x14000539d18] Getters:map[file:0x1400150e610 http:0x14000b94550 https:0x14000b945a0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"c069bbf0-300f-4d28-bb79-5114e8004ddf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0920 10:38:23.300929    7192 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:23.301083    7192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:23.301086    7192 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:23.301089    7192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:23.301252    7192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	W0920 10:38:23.301351    7192 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19678-6679/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19678-6679/.minikube/config/config.json: no such file or directory
	I0920 10:38:23.302708    7192 out.go:352] Setting JSON to true
	I0920 10:38:23.320841    7192 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4066,"bootTime":1726849837,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:38:23.320911    7192 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:38:23.323980    7192 out.go:97] [download-only-195000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:38:23.324135    7192 notify.go:220] Checking for updates...
	W0920 10:38:23.324166    7192 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 10:38:23.327581    7192 out.go:169] MINIKUBE_LOCATION=19678
	I0920 10:38:23.332641    7192 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:38:23.336585    7192 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:38:23.339630    7192 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:38:23.342582    7192 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	W0920 10:38:23.347588    7192 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 10:38:23.347845    7192 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:38:23.351588    7192 out.go:97] Using the qemu2 driver based on user configuration
	I0920 10:38:23.351607    7192 start.go:297] selected driver: qemu2
	I0920 10:38:23.351611    7192 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:38:23.351715    7192 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:38:23.354566    7192 out.go:169] Automatically selected the socket_vmnet network
	I0920 10:38:23.360078    7192 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0920 10:38:23.360187    7192 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:38:23.360244    7192 cni.go:84] Creating CNI manager for ""
	I0920 10:38:23.360291    7192 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 10:38:23.360345    7192 start.go:340] cluster config:
	{Name:download-only-195000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-195000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:38:23.364097    7192 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:38:23.368658    7192 out.go:97] Downloading VM boot image ...
	I0920 10:38:23.368675    7192 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso
	I0920 10:38:28.117652    7192 out.go:97] Starting "download-only-195000" primary control-plane node in "download-only-195000" cluster
	I0920 10:38:28.117678    7192 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:38:28.184555    7192 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:38:28.184562    7192 cache.go:56] Caching tarball of preloaded images
	I0920 10:38:28.184742    7192 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:38:28.189898    7192 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 10:38:28.189908    7192 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:38:28.279568    7192 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:38:33.582326    7192 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:38:33.582500    7192 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:38:34.277933    7192 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 10:38:34.278147    7192 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/download-only-195000/config.json ...
	I0920 10:38:34.278167    7192 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/download-only-195000/config.json: {Name:mk3e12fefb3ec8be2d7682ae7e0695fdf0524380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:38:34.278407    7192 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:38:34.279249    7192 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0920 10:38:34.829778    7192 out.go:193] 
	W0920 10:38:34.840919    7192 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1068356c0 0x1068356c0 0x1068356c0 0x1068356c0 0x1068356c0 0x1068356c0 0x1068356c0] Decompressors:map[bz2:0x14000539d00 gz:0x14000539d08 tar:0x14000539cb0 tar.bz2:0x14000539cc0 tar.gz:0x14000539cd0 tar.xz:0x14000539ce0 tar.zst:0x14000539cf0 tbz2:0x14000539cc0 tgz:0x14000539cd0 txz:0x14000539ce0 tzst:0x14000539cf0 xz:0x14000539d10 zip:0x14000539d20 zst:0x14000539d18] Getters:map[file:0x1400150e610 http:0x14000b94550 https:0x14000b945a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0920 10:38:34.840946    7192 out_reason.go:110] 
	W0920 10:38:34.848753    7192 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:38:34.851793    7192 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-195000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.62s)
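
The failure reduces to the "bad response code: 404" at the tail of the getter error above: dl.k8s.io does not serve a kubectl (or kubectl.sha256) for v1.20.0 on darwin/arm64, so minikube cannot cache the binary. A standalone Go probe, illustrative only and not part of the test suite, that reproduces the response the getter saw:

	// probe.go — illustrative; issues a HEAD request against the same
	// checksum URL the failing getter tried to fetch.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		// Per the error above, this prints a 404 status: no darwin/arm64
		// kubectl build is published for v1.20.0.
		fmt.Println(url, "->", resp.Status)
	}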

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
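
This subtest fails as a direct consequence of the 404 above: it only checks that the download step left a kubectl binary in the cache. A minimal sketch of that assertion (the path is copied verbatim from the failure message; illustrative only):

	// Sketch of the existence check the subtest performs.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			fmt.Println("cached kubectl missing:", err) // matches the stat error above
		}
	}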

TestBinaryMirror (0.27s)

=== RUN   TestBinaryMirror
I0920 10:38:42.269769    7191 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-158000 --alsologtostderr --binary-mirror http://127.0.0.1:51061 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-158000 --alsologtostderr --binary-mirror http://127.0.0.1:51061 --driver=qemu2 : exit status 40 (164.457916ms)

-- stdout --
	* [binary-mirror-158000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-158000" primary control-plane node in "binary-mirror-158000" cluster
	
	

-- /stdout --
** stderr ** 
	I0920 10:38:42.329754    7252 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:42.329867    7252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:42.329870    7252 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:42.329873    7252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:42.330006    7252 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:38:42.331070    7252 out.go:352] Setting JSON to false
	I0920 10:38:42.347134    7252 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4085,"bootTime":1726849837,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:38:42.347203    7252 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:38:42.352273    7252 out.go:177] * [binary-mirror-158000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:38:42.360275    7252 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:38:42.360332    7252 notify.go:220] Checking for updates...
	I0920 10:38:42.365774    7252 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:38:42.370994    7252 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:38:42.374297    7252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:38:42.377267    7252 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:38:42.380527    7252 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:38:42.385287    7252 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:38:42.392180    7252 start.go:297] selected driver: qemu2
	I0920 10:38:42.392185    7252 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:38:42.392229    7252 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:38:42.395224    7252 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:38:42.400603    7252 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0920 10:38:42.400705    7252 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:38:42.400724    7252 cni.go:84] Creating CNI manager for ""
	I0920 10:38:42.400761    7252 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:38:42.400768    7252 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:38:42.400821    7252 start.go:340] cluster config:
	{Name:binary-mirror-158000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:51061 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:38:42.404517    7252 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:38:42.413262    7252 out.go:177] * Starting "binary-mirror-158000" primary control-plane node in "binary-mirror-158000" cluster
	I0920 10:38:42.416233    7252 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:38:42.416251    7252 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:38:42.416263    7252 cache.go:56] Caching tarball of preloaded images
	I0920 10:38:42.416338    7252 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:38:42.416352    7252 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:38:42.416561    7252 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/binary-mirror-158000/config.json ...
	I0920 10:38:42.416573    7252 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/binary-mirror-158000/config.json: {Name:mka8edac987028c5f30d63c8482c6df509dda872 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:38:42.416981    7252 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:38:42.417039    7252 download.go:107] Downloading: http://127.0.0.1:51061/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51061/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0920 10:38:42.439384    7252 out.go:201] 
	W0920 10:38:42.443208    7252 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:51061/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51061/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:51061/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51061/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1069716c0 0x1069716c0 0x1069716c0 0x1069716c0 0x1069716c0 0x1069716c0 0x1069716c0] Decompressors:map[bz2:0x1400078b410 gz:0x1400078b418 tar:0x1400078b3c0 tar.bz2:0x1400078b3d0 tar.gz:0x1400078b3e0 tar.xz:0x1400078b3f0 tar.zst:0x1400078b400 tbz2:0x1400078b3d0 tgz:0x1400078b3e0 txz:0x1400078b3f0 tzst:0x1400078b400 xz:0x1400078b420 zip:0x1400078b430 zst:0x1400078b428] Getters:map[file:0x14000502100 http:0x14000bece60 https:0x14000beceb0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:51061/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51061/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:51061/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51061/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1069716c0 0x1069716c0 0x1069716c0 0x1069716c0 0x1069716c0 0x1069716c0 0x1069716c0] Decompressors:map[bz2:0x1400078b410 gz:0x1400078b418 tar:0x1400078b3c0 tar.bz2:0x1400078b3d0 tar.gz:0x1400078b3e0 tar.xz:0x1400078b3f0 tar.zst:0x1400078b400 tbz2:0x1400078b3d0 tgz:0x1400078b3e0 txz:0x1400078b3f0 tzst:0x1400078b400 xz:0x1400078b420 zip:0x1400078b430 zst:0x1400078b428] Getters:map[file:0x14000502100 http:0x14000bece60 https:0x14000beceb0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W0920 10:38:42.443216    7252 out.go:270] * 
	* 
	W0920 10:38:42.443708    7252 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:38:42.459307    7252 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-158000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:51061" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-158000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-158000
--- FAIL: TestBinaryMirror (0.27s)
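
Here the test points --binary-mirror at a local HTTP server on 127.0.0.1:51061, and the getter's "unexpected EOF" suggests that server closed the connection before the body completed. A hedged sketch of the kind of local binary mirror the flag expects (the test's own mirror helper in aaa_download_only_test.go is separate; the ./mirror directory name below is an assumption for illustration):

	// mirror.go — illustrative local binary mirror. It serves files laid out
	// as <version>/bin/<os>/<arch>/kubectl and kubectl.sha256, matching the
	// path scheme in the failing URL above.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		log.Println("binary mirror listening on 127.0.0.1:51061")
		log.Fatal(http.ListenAndServe("127.0.0.1:51061", http.FileServer(http.Dir("./mirror"))))
	}

With such a server populated, the start command shown above fetches kubectl and its .sha256 from http://127.0.0.1:51061 instead of dl.k8s.io.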

TestOffline (10.15s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-759000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-759000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.993678875s)

-- stdout --
	* [offline-docker-759000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-759000" primary control-plane node in "offline-docker-759000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-759000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:49:22.920574    8602 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:49:22.920721    8602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:49:22.920724    8602 out.go:358] Setting ErrFile to fd 2...
	I0920 10:49:22.920727    8602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:49:22.920869    8602 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:49:22.922157    8602 out.go:352] Setting JSON to false
	I0920 10:49:22.939651    8602 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4725,"bootTime":1726849837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:49:22.939770    8602 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:49:22.944862    8602 out.go:177] * [offline-docker-759000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:49:22.952004    8602 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:49:22.952028    8602 notify.go:220] Checking for updates...
	I0920 10:49:22.959889    8602 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:49:22.962879    8602 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:49:22.965854    8602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:49:22.968891    8602 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:49:22.971911    8602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:49:22.975222    8602 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:49:22.975277    8602 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:49:22.978833    8602 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:49:22.985809    8602 start.go:297] selected driver: qemu2
	I0920 10:49:22.985818    8602 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:49:22.985825    8602 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:49:22.987915    8602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:49:22.990834    8602 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:49:22.993948    8602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:49:22.993966    8602 cni.go:84] Creating CNI manager for ""
	I0920 10:49:22.993987    8602 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:49:22.993991    8602 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:49:22.994022    8602 start.go:340] cluster config:
	{Name:offline-docker-759000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:49:22.997971    8602 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:49:23.004874    8602 out.go:177] * Starting "offline-docker-759000" primary control-plane node in "offline-docker-759000" cluster
	I0920 10:49:23.008737    8602 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:49:23.008768    8602 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:49:23.008777    8602 cache.go:56] Caching tarball of preloaded images
	I0920 10:49:23.008852    8602 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:49:23.008857    8602 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:49:23.008926    8602 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/offline-docker-759000/config.json ...
	I0920 10:49:23.008936    8602 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/offline-docker-759000/config.json: {Name:mkf3434fb1422b078c4f2b2a4c71d976db3c0a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:49:23.009259    8602 start.go:360] acquireMachinesLock for offline-docker-759000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:49:23.009300    8602 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "offline-docker-759000"
	I0920 10:49:23.009316    8602 start.go:93] Provisioning new machine with config: &{Name:offline-docker-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:49:23.009344    8602 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:49:23.012975    8602 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:49:23.029023    8602 start.go:159] libmachine.API.Create for "offline-docker-759000" (driver="qemu2")
	I0920 10:49:23.029055    8602 client.go:168] LocalClient.Create starting
	I0920 10:49:23.029125    8602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:49:23.029154    8602 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:23.029163    8602 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:23.029210    8602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:49:23.029233    8602 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:23.029244    8602 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:23.029597    8602 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:49:23.211778    8602 main.go:141] libmachine: Creating SSH key...
	I0920 10:49:23.321804    8602 main.go:141] libmachine: Creating Disk image...
	I0920 10:49:23.321814    8602 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:49:23.322013    8602 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/disk.qcow2
	I0920 10:49:23.331579    8602 main.go:141] libmachine: STDOUT: 
	I0920 10:49:23.331603    8602 main.go:141] libmachine: STDERR: 
	I0920 10:49:23.331670    8602 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/disk.qcow2 +20000M
	I0920 10:49:23.340398    8602 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:49:23.340418    8602 main.go:141] libmachine: STDERR: 
	I0920 10:49:23.340441    8602 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/disk.qcow2
	I0920 10:49:23.340446    8602 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:49:23.340456    8602 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:49:23.340485    8602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:df:1b:67:a0:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/disk.qcow2
	I0920 10:49:23.342357    8602 main.go:141] libmachine: STDOUT: 
	I0920 10:49:23.342370    8602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:49:23.342391    8602 client.go:171] duration metric: took 313.332292ms to LocalClient.Create
	I0920 10:49:25.344478    8602 start.go:128] duration metric: took 2.335133459s to createHost
	I0920 10:49:25.344549    8602 start.go:83] releasing machines lock for "offline-docker-759000", held for 2.335257292s
	W0920 10:49:25.344568    8602 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:49:25.358290    8602 out.go:177] * Deleting "offline-docker-759000" in qemu2 ...
	W0920 10:49:25.373089    8602 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:49:25.373101    8602 start.go:729] Will try again in 5 seconds ...
	I0920 10:49:30.375239    8602 start.go:360] acquireMachinesLock for offline-docker-759000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:49:30.375715    8602 start.go:364] duration metric: took 387.958µs to acquireMachinesLock for "offline-docker-759000"
	I0920 10:49:30.375874    8602 start.go:93] Provisioning new machine with config: &{Name:offline-docker-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:49:30.376217    8602 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:49:30.387902    8602 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:49:30.438484    8602 start.go:159] libmachine.API.Create for "offline-docker-759000" (driver="qemu2")
	I0920 10:49:30.438540    8602 client.go:168] LocalClient.Create starting
	I0920 10:49:30.438657    8602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:49:30.438724    8602 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:30.438739    8602 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:30.438807    8602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:49:30.438851    8602 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:30.438867    8602 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:30.439866    8602 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:49:30.741226    8602 main.go:141] libmachine: Creating SSH key...
	I0920 10:49:30.812950    8602 main.go:141] libmachine: Creating Disk image...
	I0920 10:49:30.812956    8602 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:49:30.813133    8602 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/disk.qcow2
	I0920 10:49:30.822193    8602 main.go:141] libmachine: STDOUT: 
	I0920 10:49:30.822228    8602 main.go:141] libmachine: STDERR: 
	I0920 10:49:30.822286    8602 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/disk.qcow2 +20000M
	I0920 10:49:30.830021    8602 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:49:30.830039    8602 main.go:141] libmachine: STDERR: 
	I0920 10:49:30.830053    8602 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/disk.qcow2
	I0920 10:49:30.830058    8602 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:49:30.830067    8602 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:49:30.830092    8602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:10:95:fb:32:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/offline-docker-759000/disk.qcow2
	I0920 10:49:30.831711    8602 main.go:141] libmachine: STDOUT: 
	I0920 10:49:30.831731    8602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:49:30.831744    8602 client.go:171] duration metric: took 393.20025ms to LocalClient.Create
	I0920 10:49:32.833914    8602 start.go:128] duration metric: took 2.457679416s to createHost
	I0920 10:49:32.834119    8602 start.go:83] releasing machines lock for "offline-docker-759000", held for 2.458235042s
	W0920 10:49:32.834453    8602 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-759000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-759000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:49:32.851872    8602 out.go:201] 
	W0920 10:49:32.855819    8602 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:49:32.855906    8602 out.go:270] * 
	* 
	W0920 10:49:32.858456    8602 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:49:32.869742    8602 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-759000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-20 10:49:32.885541 -0700 PDT m=+669.663423126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-759000 -n offline-docker-759000
I0920 10:49:32.945906    7191 install.go:79] stdout: 
W0920 10:49:32.946067    7191 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/001/docker-machine-driver-hyperkit 

I0920 10:49:32.946091    7191 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/001/docker-machine-driver-hyperkit]
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-759000 -n offline-docker-759000: exit status 7 (69.356625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-759000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-759000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-759000
I0920 10:49:32.958301    7191 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/001/docker-machine-driver-hyperkit]
I0920 10:49:32.970503    7191 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/001/docker-machine-driver-hyperkit]
I0920 10:49:32.981234    7191 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/001/docker-machine-driver-hyperkit]
I0920 10:49:32.999482    7191 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 10:49:32.999612    7191 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
--- FAIL: TestOffline (10.15s)
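
Note on the failure mode: every start attempt above dies at the same point. socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and the VM never boots; everything after that (SSH, kubeconfig, status checks) fails as a consequence. A minimal sketch of that connectivity probe in Go, illustrative only and not part of the test suite (the socket path is taken from the logs above):

    // probe.go - dial the unix socket that socket_vmnet_client connects to.
    // "connection refused" here matches the failure above: no daemon is
    // listening on the socket (e.g. the socket_vmnet service is not
    // running on the CI host).
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }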

TestAddons/Setup (10.27s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-710000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-710000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.2654765s)

-- stdout --
	* [addons-710000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-710000" primary control-plane node in "addons-710000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:38:42.621032    7266 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:42.621152    7266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:42.621154    7266 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:42.621157    7266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:42.621289    7266 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:38:42.622391    7266 out.go:352] Setting JSON to false
	I0920 10:38:42.638508    7266 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4085,"bootTime":1726849837,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:38:42.638575    7266 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:38:42.643331    7266 out.go:177] * [addons-710000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:38:42.650279    7266 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:38:42.650340    7266 notify.go:220] Checking for updates...
	I0920 10:38:42.658253    7266 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:38:42.662291    7266 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:38:42.665210    7266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:38:42.669229    7266 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:38:42.672297    7266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:38:42.675424    7266 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:38:42.679219    7266 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:38:42.685164    7266 start.go:297] selected driver: qemu2
	I0920 10:38:42.685171    7266 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:38:42.685186    7266 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:38:42.687528    7266 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:38:42.690233    7266 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:38:42.693525    7266 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:38:42.693555    7266 cni.go:84] Creating CNI manager for ""
	I0920 10:38:42.693578    7266 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:38:42.693582    7266 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:38:42.693618    7266 start.go:340] cluster config:
	{Name:addons-710000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-710000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:38:42.697333    7266 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:38:42.705293    7266 out.go:177] * Starting "addons-710000" primary control-plane node in "addons-710000" cluster
	I0920 10:38:42.709286    7266 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:38:42.709306    7266 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:38:42.709311    7266 cache.go:56] Caching tarball of preloaded images
	I0920 10:38:42.709377    7266 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:38:42.709385    7266 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:38:42.709604    7266 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/addons-710000/config.json ...
	I0920 10:38:42.709616    7266 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/addons-710000/config.json: {Name:mk0fd096e125066e04d3ba3a89cdff95802c0f19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:38:42.710028    7266 start.go:360] acquireMachinesLock for addons-710000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:38:42.710098    7266 start.go:364] duration metric: took 64.084µs to acquireMachinesLock for "addons-710000"
	I0920 10:38:42.710119    7266 start.go:93] Provisioning new machine with config: &{Name:addons-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-710000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:38:42.710149    7266 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:38:42.718287    7266 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 10:38:42.735790    7266 start.go:159] libmachine.API.Create for "addons-710000" (driver="qemu2")
	I0920 10:38:42.735829    7266 client.go:168] LocalClient.Create starting
	I0920 10:38:42.735972    7266 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:38:42.878879    7266 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:38:42.921563    7266 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:38:43.201503    7266 main.go:141] libmachine: Creating SSH key...
	I0920 10:38:43.332339    7266 main.go:141] libmachine: Creating Disk image...
	I0920 10:38:43.332347    7266 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:38:43.332574    7266 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/disk.qcow2
	I0920 10:38:43.342925    7266 main.go:141] libmachine: STDOUT: 
	I0920 10:38:43.342949    7266 main.go:141] libmachine: STDERR: 
	I0920 10:38:43.343003    7266 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/disk.qcow2 +20000M
	I0920 10:38:43.350879    7266 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:38:43.350895    7266 main.go:141] libmachine: STDERR: 
	I0920 10:38:43.350913    7266 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/disk.qcow2
	I0920 10:38:43.350918    7266 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:38:43.350956    7266 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:38:43.350990    7266 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:4b:4a:50:3e:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/disk.qcow2
	I0920 10:38:43.352622    7266 main.go:141] libmachine: STDOUT: 
	I0920 10:38:43.352640    7266 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:38:43.352660    7266 client.go:171] duration metric: took 616.828ms to LocalClient.Create
	I0920 10:38:45.354884    7266 start.go:128] duration metric: took 2.644726792s to createHost
	I0920 10:38:45.354979    7266 start.go:83] releasing machines lock for "addons-710000", held for 2.644859417s
	W0920 10:38:45.355038    7266 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:38:45.369314    7266 out.go:177] * Deleting "addons-710000" in qemu2 ...
	W0920 10:38:45.401619    7266 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:38:45.401639    7266 start.go:729] Will try again in 5 seconds ...
	I0920 10:38:50.403943    7266 start.go:360] acquireMachinesLock for addons-710000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:38:50.404433    7266 start.go:364] duration metric: took 387.625µs to acquireMachinesLock for "addons-710000"
	I0920 10:38:50.404558    7266 start.go:93] Provisioning new machine with config: &{Name:addons-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-710000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:38:50.404832    7266 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:38:50.415316    7266 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 10:38:50.465993    7266 start.go:159] libmachine.API.Create for "addons-710000" (driver="qemu2")
	I0920 10:38:50.466058    7266 client.go:168] LocalClient.Create starting
	I0920 10:38:50.466185    7266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:38:50.466249    7266 main.go:141] libmachine: Decoding PEM data...
	I0920 10:38:50.466268    7266 main.go:141] libmachine: Parsing certificate...
	I0920 10:38:50.466372    7266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:38:50.466429    7266 main.go:141] libmachine: Decoding PEM data...
	I0920 10:38:50.466443    7266 main.go:141] libmachine: Parsing certificate...
	I0920 10:38:50.467017    7266 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:38:50.650251    7266 main.go:141] libmachine: Creating SSH key...
	I0920 10:38:50.788124    7266 main.go:141] libmachine: Creating Disk image...
	I0920 10:38:50.788134    7266 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:38:50.788327    7266 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/disk.qcow2
	I0920 10:38:50.797540    7266 main.go:141] libmachine: STDOUT: 
	I0920 10:38:50.797563    7266 main.go:141] libmachine: STDERR: 
	I0920 10:38:50.797701    7266 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/disk.qcow2 +20000M
	I0920 10:38:50.805491    7266 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:38:50.805508    7266 main.go:141] libmachine: STDERR: 
	I0920 10:38:50.805531    7266 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/disk.qcow2
	I0920 10:38:50.805538    7266 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:38:50.805579    7266 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:38:50.805609    7266 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:a5:86:47:17:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/addons-710000/disk.qcow2
	I0920 10:38:50.807278    7266 main.go:141] libmachine: STDOUT: 
	I0920 10:38:50.807293    7266 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:38:50.807307    7266 client.go:171] duration metric: took 341.243584ms to LocalClient.Create
	I0920 10:38:52.809678    7266 start.go:128] duration metric: took 2.404759083s to createHost
	I0920 10:38:52.809779    7266 start.go:83] releasing machines lock for "addons-710000", held for 2.40533225s
	W0920 10:38:52.810252    7266 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:38:52.820799    7266 out.go:201] 
	W0920 10:38:52.831934    7266 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:38:52.831962    7266 out.go:270] * 
	* 
	W0920 10:38:52.834469    7266 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:38:52.843761    7266 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-710000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.27s)
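
Note on the launch path: the "executing:" lines above show that the qemu2 driver does not run qemu-system-aarch64 directly. It wraps it in socket_vmnet_client, which connects to /var/run/socket_vmnet and hands QEMU the connected socket as file descriptor 3 (hence -netdev socket,id=net0,fd=3). A rough Go sketch of that wrapper pattern, inferred from the command lines above with most QEMU arguments elided:

    // launch.go - exec the socket_vmnet_client wrapper, which dials the
    // socket and passes it to qemu as fd 3. Argument list abbreviated;
    // see the full command in the log above.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command(
            "/opt/socket_vmnet/bin/socket_vmnet_client",
            "/var/run/socket_vmnet",
            "qemu-system-aarch64",
            "-M", "virt",
            "-netdev", "socket,id=net0,fd=3", // fd 3 is the dialed socket
        )
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr // the "Failed to connect ..." error surfaces here
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }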

TestCertOptions (10.21s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-654000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-654000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.949056792s)

-- stdout --
	* [cert-options-654000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-654000" primary control-plane node in "cert-options-654000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-654000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-654000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-654000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-654000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-654000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.417834ms)

-- stdout --
	* The control-plane node cert-options-654000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-654000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-654000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-654000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-654000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-654000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.096083ms)

-- stdout --
	* The control-plane node cert-options-654000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-654000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-654000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-654000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-654000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-20 10:50:03.474736 -0700 PDT m=+700.252779542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-654000 -n cert-options-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-654000 -n cert-options-654000: exit status 7 (30.834917ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-654000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-654000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-654000
--- FAIL: TestCertOptions (10.21s)
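
Note on the SAN assertions: cert_options_test.go:69 never had a certificate to inspect because the host was stopped; the four "does not include ... in SAN" lines are a downstream effect of the boot failure, not a certificate bug. For reference, a rough Go equivalent of the openssl check the test runs, illustrative only and assuming the apiserver cert has been copied out of the VM to a local file named apiserver.crt:

    // sancheck.go - parse the cert and verify the SAN entries that the
    // --apiserver-ips / --apiserver-names flags above should have produced.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "net"
        "os"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, want := range []string{"127.0.0.1", "192.168.15.15"} {
            found := false
            for _, ip := range cert.IPAddresses {
                if ip.Equal(net.ParseIP(want)) {
                    found = true
                }
            }
            fmt.Printf("SAN contains IP %s: %v\n", want, found)
        }
        for _, want := range []string{"localhost", "www.google.com"} {
            found := false
            for _, name := range cert.DNSNames {
                if name == want {
                    found = true
                }
            }
            fmt.Printf("SAN contains DNS name %s: %v\n", want, found)
        }
    }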

TestCertExpiration (195.44s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-031000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-031000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.064296209s)

-- stdout --
	* [cert-expiration-031000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-031000" primary control-plane node in "cert-expiration-031000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-031000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-031000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-031000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-031000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.233906792s)

-- stdout --
	* [cert-expiration-031000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-031000" primary control-plane node in "cert-expiration-031000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-031000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-031000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-031000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-031000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-031000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-031000" primary control-plane node in "cert-expiration-031000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-031000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-031000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-031000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-20 10:53:03.555954 -0700 PDT m=+880.334951001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-031000 -n cert-expiration-031000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-031000 -n cert-expiration-031000: exit status 7 (57.783625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-031000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-031000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-031000
--- FAIL: TestCertExpiration (195.44s)
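
Note on the expiry assertion: the test configures certificates with --cert-expiration=3m, waits out the 3 minutes (hence the 195s duration), then restarts expecting a warning about expired certs. Neither start ever booted a VM, so no certificate was issued or checked. For reference, a minimal sketch of the NotAfter check that such a test relies on, illustrative only (apiserver.crt is a hypothetical local copy of the cert):

    // expiry.go - report whether the certificate's NotAfter is in the past.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if time.Now().After(cert.NotAfter) {
            fmt.Printf("certificate expired at %s\n", cert.NotAfter)
        } else {
            fmt.Printf("certificate valid until %s\n", cert.NotAfter)
        }
    }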

TestDockerFlags (10.23s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-208000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-208000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.997053667s)

-- stdout --
	* [docker-flags-208000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-208000" primary control-plane node in "docker-flags-208000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-208000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:49:43.165419    8789 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:49:43.165566    8789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:49:43.165569    8789 out.go:358] Setting ErrFile to fd 2...
	I0920 10:49:43.165572    8789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:49:43.165712    8789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:49:43.166814    8789 out.go:352] Setting JSON to false
	I0920 10:49:43.182747    8789 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4746,"bootTime":1726849837,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:49:43.182843    8789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:49:43.190364    8789 out.go:177] * [docker-flags-208000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:49:43.198215    8789 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:49:43.198251    8789 notify.go:220] Checking for updates...
	I0920 10:49:43.207125    8789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:49:43.210149    8789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:49:43.213206    8789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:49:43.216135    8789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:49:43.223035    8789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:49:43.226509    8789 config.go:182] Loaded profile config "force-systemd-flag-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:49:43.226579    8789 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:49:43.226628    8789 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:49:43.231178    8789 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:49:43.238147    8789 start.go:297] selected driver: qemu2
	I0920 10:49:43.238152    8789 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:49:43.238158    8789 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:49:43.240572    8789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:49:43.243233    8789 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:49:43.246175    8789 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0920 10:49:43.246197    8789 cni.go:84] Creating CNI manager for ""
	I0920 10:49:43.246221    8789 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:49:43.246226    8789 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:49:43.246258    8789 start.go:340] cluster config:
	{Name:docker-flags-208000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:49:43.250351    8789 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:49:43.257131    8789 out.go:177] * Starting "docker-flags-208000" primary control-plane node in "docker-flags-208000" cluster
	I0920 10:49:43.261070    8789 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:49:43.261085    8789 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:49:43.261089    8789 cache.go:56] Caching tarball of preloaded images
	I0920 10:49:43.261166    8789 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:49:43.261172    8789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:49:43.261230    8789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/docker-flags-208000/config.json ...
	I0920 10:49:43.261249    8789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/docker-flags-208000/config.json: {Name:mke11bcc46c78d160bbc2e3ae6c5d4fb5888d505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:49:43.261686    8789 start.go:360] acquireMachinesLock for docker-flags-208000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:49:43.261736    8789 start.go:364] duration metric: took 36.375µs to acquireMachinesLock for "docker-flags-208000"
	I0920 10:49:43.261751    8789 start.go:93] Provisioning new machine with config: &{Name:docker-flags-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:49:43.261783    8789 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:49:43.269098    8789 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:49:43.286697    8789 start.go:159] libmachine.API.Create for "docker-flags-208000" (driver="qemu2")
	I0920 10:49:43.286728    8789 client.go:168] LocalClient.Create starting
	I0920 10:49:43.286790    8789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:49:43.286820    8789 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:43.286828    8789 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:43.286865    8789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:49:43.286891    8789 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:43.286897    8789 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:43.287267    8789 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:49:43.453299    8789 main.go:141] libmachine: Creating SSH key...
	I0920 10:49:43.546967    8789 main.go:141] libmachine: Creating Disk image...
	I0920 10:49:43.546972    8789 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:49:43.547160    8789 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/disk.qcow2
	I0920 10:49:43.556611    8789 main.go:141] libmachine: STDOUT: 
	I0920 10:49:43.556625    8789 main.go:141] libmachine: STDERR: 
	I0920 10:49:43.556678    8789 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/disk.qcow2 +20000M
	I0920 10:49:43.564550    8789 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:49:43.564563    8789 main.go:141] libmachine: STDERR: 
	I0920 10:49:43.564579    8789 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/disk.qcow2
	I0920 10:49:43.564583    8789 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:49:43.564597    8789 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:49:43.564635    8789 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:d8:53:7b:63:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/disk.qcow2
	I0920 10:49:43.566316    8789 main.go:141] libmachine: STDOUT: 
	I0920 10:49:43.566331    8789 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:49:43.566349    8789 client.go:171] duration metric: took 279.617125ms to LocalClient.Create
	I0920 10:49:45.568527    8789 start.go:128] duration metric: took 2.306739875s to createHost
	I0920 10:49:45.568634    8789 start.go:83] releasing machines lock for "docker-flags-208000", held for 2.306899125s
	W0920 10:49:45.568697    8789 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:49:45.588815    8789 out.go:177] * Deleting "docker-flags-208000" in qemu2 ...
	W0920 10:49:45.618812    8789 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:49:45.618830    8789 start.go:729] Will try again in 5 seconds ...
	I0920 10:49:50.620959    8789 start.go:360] acquireMachinesLock for docker-flags-208000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:49:50.650865    8789 start.go:364] duration metric: took 29.7625ms to acquireMachinesLock for "docker-flags-208000"
	I0920 10:49:50.650951    8789 start.go:93] Provisioning new machine with config: &{Name:docker-flags-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:49:50.651178    8789 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:49:50.669843    8789 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:49:50.701629    8789 start.go:159] libmachine.API.Create for "docker-flags-208000" (driver="qemu2")
	I0920 10:49:50.701665    8789 client.go:168] LocalClient.Create starting
	I0920 10:49:50.701755    8789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:49:50.701797    8789 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:50.701807    8789 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:50.701847    8789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:49:50.701882    8789 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:50.701889    8789 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:50.702194    8789 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:49:50.904038    8789 main.go:141] libmachine: Creating SSH key...
	I0920 10:49:51.048887    8789 main.go:141] libmachine: Creating Disk image...
	I0920 10:49:51.048899    8789 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:49:51.049129    8789 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/disk.qcow2
	I0920 10:49:51.058641    8789 main.go:141] libmachine: STDOUT: 
	I0920 10:49:51.058659    8789 main.go:141] libmachine: STDERR: 
	I0920 10:49:51.058712    8789 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/disk.qcow2 +20000M
	I0920 10:49:51.066704    8789 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:49:51.066720    8789 main.go:141] libmachine: STDERR: 
	I0920 10:49:51.066730    8789 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/disk.qcow2
	I0920 10:49:51.066745    8789 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:49:51.066752    8789 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:49:51.066792    8789 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:ed:37:ac:f4:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/docker-flags-208000/disk.qcow2
	I0920 10:49:51.068452    8789 main.go:141] libmachine: STDOUT: 
	I0920 10:49:51.068464    8789 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:49:51.068477    8789 client.go:171] duration metric: took 366.810375ms to LocalClient.Create
	I0920 10:49:53.070640    8789 start.go:128] duration metric: took 2.419424125s to createHost
	I0920 10:49:53.070710    8789 start.go:83] releasing machines lock for "docker-flags-208000", held for 2.419809292s
	W0920 10:49:53.071075    8789 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-208000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-208000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:49:53.086699    8789 out.go:201] 
	W0920 10:49:53.101915    8789 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:49:53.101948    8789 out.go:270] * 
	* 
	W0920 10:49:53.103962    8789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:49:53.119675    8789 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-208000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-208000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-208000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (83.216541ms)

-- stdout --
	* The control-plane node docker-flags-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-208000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-208000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-208000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-208000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-208000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-208000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.667541ms)

-- stdout --
	* The control-plane node docker-flags-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-208000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-208000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-208000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-208000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-20 10:49:53.264515 -0700 PDT m=+690.042504584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-208000 -n docker-flags-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-208000 -n docker-flags-208000: exit status 7 (28.629333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-208000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-208000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-208000
--- FAIL: TestDockerFlags (10.23s)
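Both assertions above have concrete targets once a cluster actually starts: docker_test.go:63 looks for the --docker-env pairs in the docker unit's Environment property, and docker_test.go:73 looks for the --docker-opt values in its ExecStart line. A sketch of the passing-case checks, reusing the profile name and flag values from this run (the exact shape of systemctl's output is an assumption):

	# Environment should carry the --docker-env pairs:
	out/minikube-darwin-arm64 -p docker-flags-208000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# expected to contain: FOO=BAR and BAZ=BAT

	# ExecStart should carry the --docker-opt values:
	out/minikube-darwin-arm64 -p docker-flags-208000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	# expected to contain: --debug (per the assertion above); icc=true was the other --docker-opt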

TestForceSystemdFlag (10.48s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-239000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-239000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.276126333s)

-- stdout --
	* [force-systemd-flag-239000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-239000" primary control-plane node in "force-systemd-flag-239000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-239000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:49:37.805850    8768 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:49:37.805970    8768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:49:37.805974    8768 out.go:358] Setting ErrFile to fd 2...
	I0920 10:49:37.805976    8768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:49:37.806113    8768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:49:37.807171    8768 out.go:352] Setting JSON to false
	I0920 10:49:37.823156    8768 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4740,"bootTime":1726849837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:49:37.823227    8768 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:49:37.830124    8768 out.go:177] * [force-systemd-flag-239000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:49:37.848259    8768 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:49:37.848308    8768 notify.go:220] Checking for updates...
	I0920 10:49:37.858035    8768 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:49:37.860986    8768 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:49:37.864077    8768 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:49:37.867124    8768 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:49:37.868623    8768 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:49:37.872459    8768 config.go:182] Loaded profile config "force-systemd-env-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:49:37.872539    8768 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:49:37.872587    8768 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:49:37.877080    8768 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:49:37.883032    8768 start.go:297] selected driver: qemu2
	I0920 10:49:37.883039    8768 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:49:37.883046    8768 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:49:37.885324    8768 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:49:37.889093    8768 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:49:37.892166    8768 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:49:37.892182    8768 cni.go:84] Creating CNI manager for ""
	I0920 10:49:37.892212    8768 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:49:37.892217    8768 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:49:37.892248    8768 start.go:340] cluster config:
	{Name:force-systemd-flag-239000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:49:37.895766    8768 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:49:37.903056    8768 out.go:177] * Starting "force-systemd-flag-239000" primary control-plane node in "force-systemd-flag-239000" cluster
	I0920 10:49:37.907093    8768 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:49:37.907119    8768 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:49:37.907130    8768 cache.go:56] Caching tarball of preloaded images
	I0920 10:49:37.907223    8768 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:49:37.907229    8768 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:49:37.907297    8768 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/force-systemd-flag-239000/config.json ...
	I0920 10:49:37.907313    8768 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/force-systemd-flag-239000/config.json: {Name:mk45f3cfb9ebae2e4cbab0939f9a303ac0edf446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:49:37.907547    8768 start.go:360] acquireMachinesLock for force-systemd-flag-239000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:49:37.907585    8768 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "force-systemd-flag-239000"
	I0920 10:49:37.907600    8768 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:49:37.907642    8768 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:49:37.913978    8768 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:49:37.932407    8768 start.go:159] libmachine.API.Create for "force-systemd-flag-239000" (driver="qemu2")
	I0920 10:49:37.932439    8768 client.go:168] LocalClient.Create starting
	I0920 10:49:37.932509    8768 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:49:37.932543    8768 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:37.932552    8768 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:37.932592    8768 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:49:37.932616    8768 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:37.932624    8768 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:37.933081    8768 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:49:38.125108    8768 main.go:141] libmachine: Creating SSH key...
	I0920 10:49:38.198259    8768 main.go:141] libmachine: Creating Disk image...
	I0920 10:49:38.198264    8768 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:49:38.198450    8768 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/disk.qcow2
	I0920 10:49:38.207634    8768 main.go:141] libmachine: STDOUT: 
	I0920 10:49:38.207648    8768 main.go:141] libmachine: STDERR: 
	I0920 10:49:38.207712    8768 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/disk.qcow2 +20000M
	I0920 10:49:38.215433    8768 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:49:38.215448    8768 main.go:141] libmachine: STDERR: 
	I0920 10:49:38.215472    8768 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/disk.qcow2
	I0920 10:49:38.215476    8768 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:49:38.215489    8768 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:49:38.215516    8768 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:71:10:fb:b0:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/disk.qcow2
	I0920 10:49:38.217054    8768 main.go:141] libmachine: STDOUT: 
	I0920 10:49:38.217069    8768 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:49:38.217088    8768 client.go:171] duration metric: took 284.644708ms to LocalClient.Create
	I0920 10:49:40.219303    8768 start.go:128] duration metric: took 2.311652542s to createHost
	I0920 10:49:40.219361    8768 start.go:83] releasing machines lock for "force-systemd-flag-239000", held for 2.311778291s
	W0920 10:49:40.219435    8768 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:49:40.232601    8768 out.go:177] * Deleting "force-systemd-flag-239000" in qemu2 ...
	W0920 10:49:40.270222    8768 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:49:40.270242    8768 start.go:729] Will try again in 5 seconds ...
	I0920 10:49:45.272436    8768 start.go:360] acquireMachinesLock for force-systemd-flag-239000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:49:45.568766    8768 start.go:364] duration metric: took 296.181708ms to acquireMachinesLock for "force-systemd-flag-239000"
	I0920 10:49:45.568919    8768 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:49:45.569183    8768 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:49:45.580973    8768 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:49:45.629767    8768 start.go:159] libmachine.API.Create for "force-systemd-flag-239000" (driver="qemu2")
	I0920 10:49:45.629814    8768 client.go:168] LocalClient.Create starting
	I0920 10:49:45.629938    8768 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:49:45.630004    8768 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:45.630023    8768 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:45.630088    8768 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:49:45.630132    8768 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:45.630149    8768 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:45.630699    8768 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:49:45.851501    8768 main.go:141] libmachine: Creating SSH key...
	I0920 10:49:45.974796    8768 main.go:141] libmachine: Creating Disk image...
	I0920 10:49:45.974804    8768 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:49:45.975004    8768 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/disk.qcow2
	I0920 10:49:45.984070    8768 main.go:141] libmachine: STDOUT: 
	I0920 10:49:45.984094    8768 main.go:141] libmachine: STDERR: 
	I0920 10:49:45.984156    8768 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/disk.qcow2 +20000M
	I0920 10:49:45.992195    8768 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:49:45.992277    8768 main.go:141] libmachine: STDERR: 
	I0920 10:49:45.992291    8768 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/disk.qcow2
	I0920 10:49:45.992296    8768 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:49:45.992305    8768 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:49:45.992331    8768 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:a2:90:9e:23:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-flag-239000/disk.qcow2
	I0920 10:49:45.993917    8768 main.go:141] libmachine: STDOUT: 
	I0920 10:49:45.993931    8768 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:49:45.993949    8768 client.go:171] duration metric: took 364.132458ms to LocalClient.Create
	I0920 10:49:47.996184    8768 start.go:128] duration metric: took 2.426962334s to createHost
	I0920 10:49:47.996250    8768 start.go:83] releasing machines lock for "force-systemd-flag-239000", held for 2.427466375s
	W0920 10:49:47.996606    8768 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-239000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-239000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:49:48.019314    8768 out.go:201] 
	W0920 10:49:48.028226    8768 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:49:48.028261    8768 out.go:270] * 
	* 
	W0920 10:49:48.030258    8768 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:49:48.039953    8768 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-239000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-239000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-239000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (84.489625ms)

-- stdout --
	* The control-plane node force-systemd-flag-239000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-239000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-239000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-20 10:49:48.142078 -0700 PDT m=+684.920040376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-239000 -n force-systemd-flag-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-239000 -n force-systemd-flag-239000: exit status 7 (33.744375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-239000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-239000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-239000
--- FAIL: TestForceSystemdFlag (10.48s)
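Every GUEST_PROVISION failure in this report reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a vmnet file descriptor and every qemu2 VM create/start dies with "Connection refused". A minimal pre-flight probe in Go reproduces the check; this is a diagnostic sketch, not part of the test suite, and the socket path is the SocketVMnetPath shown in the cluster configs above.

```go
// socketprobe: a minimal diagnostic sketch, not part of the minikube suite.
// It dials the unix socket that socket_vmnet_client uses; "connection
// refused" here corresponds to the GUEST_PROVISION failures in this report.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const path = "/var/run/socket_vmnet" // SocketVMnetPath from the configs above
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", path, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```

If the probe fails on the CI host, restarting the socket_vmnet daemon (however it is supervised on that machine, e.g. via launchd) before the run is the obvious first remedy.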
TestForceSystemdEnv (10.1s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-969000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I0920 10:49:34.750796    7191 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0920 10:49:34.750816    7191 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0920 10:49:34.750867    7191 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0920 10:49:34.750907    7191 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/002/docker-machine-driver-hyperkit
	I0920 10:49:35.167785    7191 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40] Decompressors:map[bz2:0x14000131840 gz:0x14000131848 tar:0x140001317f0 tar.bz2:0x14000131800 tar.gz:0x14000131810 tar.xz:0x14000131820 tar.zst:0x14000131830 tbz2:0x14000131800 tgz:0x14000131810 txz:0x14000131820 tzst:0x14000131830 xz:0x14000131850 zip:0x14000131860 zst:0x14000131858] Getters:map[file:0x140013d1d60 http:0x14000142d20 https:0x14000142d70] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 10:49:35.167907    7191 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/002/docker-machine-driver-hyperkit
I0920 10:49:37.736960    7191 install.go:79] stdout: 
W0920 10:49:37.737144    7191 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/002/docker-machine-driver-hyperkit 
I0920 10:49:37.737165    7191 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/002/docker-machine-driver-hyperkit]
I0920 10:49:37.750516    7191 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/002/docker-machine-driver-hyperkit]
I0920 10:49:37.760893    7191 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/002/docker-machine-driver-hyperkit]
I0920 10:49:37.769260    7191 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-969000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.908166292s)
-- stdout --
	* [force-systemd-env-969000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-969000" primary control-plane node in "force-systemd-env-969000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-969000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0920 10:49:33.069038    8748 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:49:33.069181    8748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:49:33.069189    8748 out.go:358] Setting ErrFile to fd 2...
	I0920 10:49:33.069191    8748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:49:33.069336    8748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:49:33.070506    8748 out.go:352] Setting JSON to false
	I0920 10:49:33.088284    8748 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4736,"bootTime":1726849837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:49:33.088376    8748 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:49:33.096006    8748 out.go:177] * [force-systemd-env-969000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:49:33.103945    8748 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:49:33.103975    8748 notify.go:220] Checking for updates...
	I0920 10:49:33.111008    8748 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:49:33.114034    8748 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:49:33.116998    8748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:49:33.119975    8748 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:49:33.122881    8748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0920 10:49:33.126331    8748 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:49:33.126385    8748 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:49:33.131009    8748 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:49:33.137962    8748 start.go:297] selected driver: qemu2
	I0920 10:49:33.137969    8748 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:49:33.137974    8748 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:49:33.140131    8748 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:49:33.143023    8748 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:49:33.144254    8748 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:49:33.144267    8748 cni.go:84] Creating CNI manager for ""
	I0920 10:49:33.144287    8748 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:49:33.144291    8748 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:49:33.144325    8748 start.go:340] cluster config:
	{Name:force-systemd-env-969000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:49:33.147660    8748 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:49:33.154999    8748 out.go:177] * Starting "force-systemd-env-969000" primary control-plane node in "force-systemd-env-969000" cluster
	I0920 10:49:33.158990    8748 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:49:33.159009    8748 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:49:33.159021    8748 cache.go:56] Caching tarball of preloaded images
	I0920 10:49:33.159088    8748 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:49:33.159094    8748 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:49:33.159151    8748 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/force-systemd-env-969000/config.json ...
	I0920 10:49:33.159161    8748 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/force-systemd-env-969000/config.json: {Name:mk1a71809000e413ea7190517ae1863aac739af7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:49:33.159377    8748 start.go:360] acquireMachinesLock for force-systemd-env-969000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:49:33.159408    8748 start.go:364] duration metric: took 25.167µs to acquireMachinesLock for "force-systemd-env-969000"
	I0920 10:49:33.159419    8748 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:49:33.159446    8748 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:49:33.166967    8748 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:49:33.182641    8748 start.go:159] libmachine.API.Create for "force-systemd-env-969000" (driver="qemu2")
	I0920 10:49:33.182671    8748 client.go:168] LocalClient.Create starting
	I0920 10:49:33.182733    8748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:49:33.182764    8748 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:33.182774    8748 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:33.182811    8748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:49:33.182834    8748 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:33.182841    8748 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:33.183198    8748 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:49:33.348391    8748 main.go:141] libmachine: Creating SSH key...
	I0920 10:49:33.428769    8748 main.go:141] libmachine: Creating Disk image...
	I0920 10:49:33.428781    8748 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:49:33.429011    8748 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/disk.qcow2
	I0920 10:49:33.438594    8748 main.go:141] libmachine: STDOUT: 
	I0920 10:49:33.438611    8748 main.go:141] libmachine: STDERR: 
	I0920 10:49:33.438674    8748 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/disk.qcow2 +20000M
	I0920 10:49:33.446802    8748 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:49:33.446824    8748 main.go:141] libmachine: STDERR: 
	I0920 10:49:33.446836    8748 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/disk.qcow2
	I0920 10:49:33.446842    8748 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:49:33.446856    8748 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:49:33.446882    8748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:63:ad:dc:55:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/disk.qcow2
	I0920 10:49:33.448544    8748 main.go:141] libmachine: STDOUT: 
	I0920 10:49:33.448558    8748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:49:33.448580    8748 client.go:171] duration metric: took 265.9045ms to LocalClient.Create
	I0920 10:49:35.450787    8748 start.go:128] duration metric: took 2.291323125s to createHost
	I0920 10:49:35.450875    8748 start.go:83] releasing machines lock for "force-systemd-env-969000", held for 2.291469084s
	W0920 10:49:35.450961    8748 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:49:35.469292    8748 out.go:177] * Deleting "force-systemd-env-969000" in qemu2 ...
	W0920 10:49:35.501330    8748 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:49:35.501356    8748 start.go:729] Will try again in 5 seconds ...
	I0920 10:49:40.503489    8748 start.go:360] acquireMachinesLock for force-systemd-env-969000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:49:40.504002    8748 start.go:364] duration metric: took 419.166µs to acquireMachinesLock for "force-systemd-env-969000"
	I0920 10:49:40.504126    8748 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:49:40.504422    8748 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:49:40.524745    8748 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:49:40.576422    8748 start.go:159] libmachine.API.Create for "force-systemd-env-969000" (driver="qemu2")
	I0920 10:49:40.576471    8748 client.go:168] LocalClient.Create starting
	I0920 10:49:40.576577    8748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:49:40.576649    8748 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:40.576662    8748 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:40.576728    8748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:49:40.576772    8748 main.go:141] libmachine: Decoding PEM data...
	I0920 10:49:40.576782    8748 main.go:141] libmachine: Parsing certificate...
	I0920 10:49:40.577475    8748 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:49:40.752822    8748 main.go:141] libmachine: Creating SSH key...
	I0920 10:49:40.877111    8748 main.go:141] libmachine: Creating Disk image...
	I0920 10:49:40.877116    8748 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:49:40.877304    8748 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/disk.qcow2
	I0920 10:49:40.886564    8748 main.go:141] libmachine: STDOUT: 
	I0920 10:49:40.886587    8748 main.go:141] libmachine: STDERR: 
	I0920 10:49:40.886644    8748 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/disk.qcow2 +20000M
	I0920 10:49:40.894431    8748 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:49:40.894445    8748 main.go:141] libmachine: STDERR: 
	I0920 10:49:40.894457    8748 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/disk.qcow2
	I0920 10:49:40.894461    8748 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:49:40.894469    8748 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:49:40.894502    8748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:1a:e6:7a:ff:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/force-systemd-env-969000/disk.qcow2
	I0920 10:49:40.896072    8748 main.go:141] libmachine: STDOUT: 
	I0920 10:49:40.896085    8748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:49:40.896098    8748 client.go:171] duration metric: took 319.624083ms to LocalClient.Create
	I0920 10:49:42.898368    8748 start.go:128] duration metric: took 2.393920291s to createHost
	I0920 10:49:42.898443    8748 start.go:83] releasing machines lock for "force-systemd-env-969000", held for 2.394428875s
	W0920 10:49:42.898788    8748 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:49:42.908418    8748 out.go:201] 
	W0920 10:49:42.916396    8748 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:49:42.916421    8748 out.go:270] * 
	* 
	W0920 10:49:42.919065    8748 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:49:42.930331    8748 out.go:201] 
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-969000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-969000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-969000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.881708ms)
-- stdout --
	* The control-plane node force-systemd-env-969000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-969000"
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-969000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-20 10:49:43.02523 -0700 PDT m=+679.803165709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-969000 -n force-systemd-env-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-969000 -n force-systemd-env-969000: exit status 7 (33.440292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-969000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-969000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-969000
--- FAIL: TestForceSystemdEnv (10.10s)
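The follow-up cgroup-driver query can only fail here: with the host stopped, `minikube ssh` returns exit status 83 instead of the driver name. Below is a sketch, not the actual helper in docker_test.go, of how that invocation and its exit codes map onto os/exec, reusing the binary path and profile name from the failure above.

```go
// Sketch of the cgroup-driver query; not the real docker_test.go helper.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-env-969000",
		"ssh", "docker info --format {{.CgroupDriver}}")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In this run: 80 = start/provision failed, 83 = host not running.
		fmt.Printf("minikube exited %d:\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Printf("cgroup driver: %s", out)
}
```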
TestErrorSpam/setup (9.96s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-531000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-531000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 --driver=qemu2 : exit status 80 (9.955665542s)
-- stdout --
	* [nospam-531000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-531000" primary control-plane node in "nospam-531000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-531000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-531000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-531000] minikube v1.34.0 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19678
- KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-531000" primary control-plane node in "nospam-531000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
* Deleting "nospam-531000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.96s)
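TestErrorSpam/setup fails on two counts: the start itself exits 80, and each line of the resulting failure advice is then flagged as "unexpected stderr". The check amounts to an allow-list over stderr lines; the sketch below illustrates that idea, with allowed prefixes that are assumptions for illustration, not the test's real list.

```go
// Allow-list stderr check in the style of error_spam_test.go. The prefixes
// passed to unexpectedLines here are illustrative, not the test's real list.
package main

import (
	"fmt"
	"strings"
)

// unexpectedLines returns every non-empty stderr line that does not start
// with one of the allowed prefixes.
func unexpectedLines(stderr string, allowed []string) []string {
	var bad []string
	for _, line := range strings.Split(strings.TrimSpace(stderr), "\n") {
		ok := line == ""
		for _, p := range allowed {
			if strings.HasPrefix(line, p) {
				ok = true
				break
			}
		}
		if !ok {
			bad = append(bad, line)
		}
	}
	return bad
}

func main() {
	stderr := "! StartHost failed, but will try again: ...\n* Failed to start qemu2 VM ..."
	for _, l := range unexpectedLines(stderr, []string{"I0", "W0"}) {
		fmt.Printf("unexpected stderr: %q\n", l)
	}
}
```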
TestFunctional/serial/StartWithProxy (9.93s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-693000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-693000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.860011917s)
-- stdout --
	* [functional-693000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-693000" primary control-plane node in "functional-693000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-693000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51093 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51093 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51093 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-693000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-693000] minikube v1.34.0 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19678
- KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-693000" primary control-plane node in "functional-693000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
* Deleting "functional-693000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51093 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51093 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51093 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (69.853916ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.93s)
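StartWithProxy runs the start with HTTP_PROXY=localhost:51093 in the environment (hence the "Local proxy ignored" warnings) and then looks for the proxy notices in the output; neither "Found network options:" nor "You appear to be using a proxy" can appear because the run dies at VM creation. A sketch of that scenario follows; it is illustrative, not the code in functional_test.go.

```go
// Proxy-scenario sketch: inject HTTP_PROXY and scan the output for the
// notices the test expects. Flags, profile, and markers come from the
// failure above; this is illustrative, not functional_test.go itself.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-693000",
		"--memory=4000", "--apiserver-port=8441", "--wait=all", "--driver=qemu2")
	cmd.Env = append(os.Environ(), "HTTP_PROXY=localhost:51093")
	out, _ := cmd.CombinedOutput() // exit code deliberately ignored in this sketch
	for _, want := range []string{"Found network options:", "You appear to be using a proxy"} {
		if !strings.Contains(string(out), want) {
			fmt.Printf("missing expected notice: %q\n", want)
		}
	}
}
```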
TestFunctional/serial/SoftStart (5.26s)
=== RUN   TestFunctional/serial/SoftStart
I0920 10:39:22.596087    7191 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-693000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-693000 --alsologtostderr -v=8: exit status 80 (5.183620792s)
-- stdout --
	* [functional-693000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-693000" primary control-plane node in "functional-693000" cluster
	* Restarting existing qemu2 VM for "functional-693000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-693000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0920 10:39:22.626568    7406 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:39:22.626720    7406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:22.626723    7406 out.go:358] Setting ErrFile to fd 2...
	I0920 10:39:22.626726    7406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:22.626849    7406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:39:22.627893    7406 out.go:352] Setting JSON to false
	I0920 10:39:22.644015    7406 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4125,"bootTime":1726849837,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:39:22.644089    7406 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:39:22.649446    7406 out.go:177] * [functional-693000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:39:22.656454    7406 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:39:22.656507    7406 notify.go:220] Checking for updates...
	I0920 10:39:22.664435    7406 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:39:22.668510    7406 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:39:22.671392    7406 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:39:22.674466    7406 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:39:22.677450    7406 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:39:22.680647    7406 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:39:22.680698    7406 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:39:22.685439    7406 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:39:22.692448    7406 start.go:297] selected driver: qemu2
	I0920 10:39:22.692456    7406 start.go:901] validating driver "qemu2" against &{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:39:22.692566    7406 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:39:22.694985    7406 cni.go:84] Creating CNI manager for ""
	I0920 10:39:22.695028    7406 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:39:22.695071    7406 start.go:340] cluster config:
	{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:39:22.698647    7406 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:39:22.705455    7406 out.go:177] * Starting "functional-693000" primary control-plane node in "functional-693000" cluster
	I0920 10:39:22.709382    7406 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:39:22.709397    7406 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:39:22.709403    7406 cache.go:56] Caching tarball of preloaded images
	I0920 10:39:22.709456    7406 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:39:22.709461    7406 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:39:22.709517    7406 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/functional-693000/config.json ...
	I0920 10:39:22.710059    7406 start.go:360] acquireMachinesLock for functional-693000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:39:22.710085    7406 start.go:364] duration metric: took 20.541µs to acquireMachinesLock for "functional-693000"
	I0920 10:39:22.710094    7406 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:39:22.710098    7406 fix.go:54] fixHost starting: 
	I0920 10:39:22.710215    7406 fix.go:112] recreateIfNeeded on functional-693000: state=Stopped err=<nil>
	W0920 10:39:22.710223    7406 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:39:22.718515    7406 out.go:177] * Restarting existing qemu2 VM for "functional-693000" ...
	I0920 10:39:22.722437    7406 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:39:22.722477    7406 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:af:bb:19:af:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/disk.qcow2
	I0920 10:39:22.724521    7406 main.go:141] libmachine: STDOUT: 
	I0920 10:39:22.724537    7406 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:39:22.724572    7406 fix.go:56] duration metric: took 14.471583ms for fixHost
	I0920 10:39:22.724576    7406 start.go:83] releasing machines lock for "functional-693000", held for 14.487709ms
	W0920 10:39:22.724583    7406 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:39:22.724628    7406 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:39:22.724636    7406 start.go:729] Will try again in 5 seconds ...
	I0920 10:39:27.726743    7406 start.go:360] acquireMachinesLock for functional-693000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:39:27.727187    7406 start.go:364] duration metric: took 362.167µs to acquireMachinesLock for "functional-693000"
	I0920 10:39:27.727302    7406 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:39:27.727326    7406 fix.go:54] fixHost starting: 
	I0920 10:39:27.728026    7406 fix.go:112] recreateIfNeeded on functional-693000: state=Stopped err=<nil>
	W0920 10:39:27.728058    7406 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:39:27.732558    7406 out.go:177] * Restarting existing qemu2 VM for "functional-693000" ...
	I0920 10:39:27.736492    7406 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:39:27.736658    7406 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:af:bb:19:af:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/disk.qcow2
	I0920 10:39:27.745726    7406 main.go:141] libmachine: STDOUT: 
	I0920 10:39:27.745797    7406 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:39:27.745881    7406 fix.go:56] duration metric: took 18.559709ms for fixHost
	I0920 10:39:27.745910    7406 start.go:83] releasing machines lock for "functional-693000", held for 18.694ms
	W0920 10:39:27.746097    7406 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-693000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-693000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:39:27.752500    7406 out.go:201] 
	W0920 10:39:27.755614    7406 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:39:27.755637    7406 out.go:270] * 
	* 
	W0920 10:39:27.758624    7406 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:39:27.766445    7406 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-693000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.18551725s for "functional-693000" cluster.
I0920 10:39:27.781816    7191 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (70.398209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
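Note: every failure below this point traces to the same root cause visible in the trace above: libmachine wraps the qemu-system-aarch64 invocation in /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon's unix socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal standalone Go sketch of the kind of preflight probe that would detect this; it is illustrative only and not part of the test suite:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    // Probe the unix socket that socket_vmnet_client needs. A dial error here
    // reproduces the "Connection refused" driver-start failure in this report.
    func main() {
        const sock = "/var/run/socket_vmnet" // path taken from the log above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }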

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.519125ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-693000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (31.292875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
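Note: this and the following kubectl failures are secondary. Because the VM never started, "minikube start" never wrote a functional-693000 context into the kubeconfig, so "kubectl config current-context" has nothing to return. Roughly the shape of the check the test performs, as an illustrative Go sketch (the helper name is invented):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // currentContext shells out to kubectl the way the test does; with no
    // cluster started, kubectl exits 1 with "current-context is not set".
    func currentContext() (string, error) {
        out, err := exec.Command("kubectl", "config", "current-context").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        ctx, err := currentContext()
        if err != nil {
            fmt.Println("no current context:", err)
            return
        }
        fmt.Println("current context:", ctx)
    }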

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-693000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-693000 get po -A: exit status 1 (25.908625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-693000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-693000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-693000\n"*: args "kubectl --context functional-693000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-693000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (30.750584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh sudo crictl images: exit status 83 (43.885458ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-693000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)
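Note: the ssh subcommands in the cache tests exit with status 83 alongside the "host is not running: state=Stopped" message, which the harness reports separately from an in-guest command failure. An illustrative sketch of how such an exit code is recovered from a subprocess in Go (binary path and arguments copied from the log; this is not the harness's actual code):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-693000",
            "ssh", "sudo", "crictl", "images")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // In this run the code is 83: control-plane host not running.
            fmt.Println("exit code:", exitErr.ExitCode())
        }
    }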

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (40.685583ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-693000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.9515ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.940625ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-693000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (2.19s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 kubectl -- --context functional-693000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 kubectl -- --context functional-693000 get pods: exit status 1 (2.161401s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-693000
	* no server found for cluster "functional-693000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-693000 kubectl -- --context functional-693000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (31.980959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (2.19s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.04s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-693000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-693000 get pods: exit status 1 (1.009355709s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-693000
	* no server found for cluster "functional-693000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-693000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (30.641292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.04s)

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-693000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-693000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.187175167s)

-- stdout --
	* [functional-693000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-693000" primary control-plane node in "functional-693000" cluster
	* Restarting existing qemu2 VM for "functional-693000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-693000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-693000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-693000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.187691834s for "functional-693000" cluster.
I0920 10:39:39.572040    7191 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (73.20525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
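Note: the start path visible in these logs retries exactly once: StartHost fails, minikube waits five seconds ("Will try again in 5 seconds ..."), retries, and on the second failure exits 80 with GUEST_PROVISION. A Go sketch of that retry shape under those assumptions; it mirrors the logged messages and is not minikube's actual implementation:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // startHost stands in for the driver start that keeps failing with
    // "Connection refused" in this report.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            if err := startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
                os.Exit(80) // exit status observed for these start failures
            }
        }
    }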

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-693000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-693000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.343375ms)

** stderr ** 
	error: context "functional-693000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-693000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (30.188833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 logs: exit status 83 (78.669417ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-195000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
	|         | -p download-only-195000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
	| delete  | -p download-only-195000                                                  | download-only-195000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
	| start   | -o=json --download-only                                                  | download-only-177000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
	|         | -p download-only-177000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
	| delete  | -p download-only-177000                                                  | download-only-177000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
	| delete  | -p download-only-195000                                                  | download-only-195000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
	| delete  | -p download-only-177000                                                  | download-only-177000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
	| start   | --download-only -p                                                       | binary-mirror-158000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
	|         | binary-mirror-158000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51061                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-158000                                                  | binary-mirror-158000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
	| addons  | disable dashboard -p                                                     | addons-710000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
	|         | addons-710000                                                            |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                      | addons-710000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
	|         | addons-710000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-710000 --wait=true                                             | addons-710000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-710000                                                         | addons-710000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
	| start   | -p nospam-531000 -n=1 --memory=2250 --wait=false                         | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-531000                                                         | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	| start   | -p functional-693000                                                     | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-693000                                                     | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-693000 cache add                                              | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-693000 cache add                                              | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-693000 cache add                                              | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-693000 cache add                                              | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	|         | minikube-local-cache-test:functional-693000                              |                      |         |         |                     |                     |
	| cache   | functional-693000 cache delete                                           | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	|         | minikube-local-cache-test:functional-693000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	| ssh     | functional-693000 ssh sudo                                               | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-693000                                                        | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-693000 ssh                                                    | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-693000 cache reload                                           | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	| ssh     | functional-693000 ssh                                                    | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-693000 kubectl --                                             | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | --context functional-693000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-693000                                                     | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 10:39:34
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 10:39:34.412007    7485 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:39:34.412146    7485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:34.412148    7485 out.go:358] Setting ErrFile to fd 2...
	I0920 10:39:34.412149    7485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:34.412271    7485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:39:34.413309    7485 out.go:352] Setting JSON to false
	I0920 10:39:34.429174    7485 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4137,"bootTime":1726849837,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:39:34.429236    7485 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:39:34.438291    7485 out.go:177] * [functional-693000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:39:34.447256    7485 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:39:34.447296    7485 notify.go:220] Checking for updates...
	I0920 10:39:34.457177    7485 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:39:34.461221    7485 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:39:34.462508    7485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:39:34.465152    7485 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:39:34.468224    7485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:39:34.471550    7485 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:39:34.471614    7485 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:39:34.476224    7485 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:39:34.483211    7485 start.go:297] selected driver: qemu2
	I0920 10:39:34.483216    7485 start.go:901] validating driver "qemu2" against &{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:39:34.483271    7485 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:39:34.485669    7485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:39:34.485694    7485 cni.go:84] Creating CNI manager for ""
	I0920 10:39:34.485724    7485 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:39:34.485766    7485 start.go:340] cluster config:
	{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:39:34.489433    7485 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:39:34.496203    7485 out.go:177] * Starting "functional-693000" primary control-plane node in "functional-693000" cluster
	I0920 10:39:34.500142    7485 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:39:34.500157    7485 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:39:34.500160    7485 cache.go:56] Caching tarball of preloaded images
	I0920 10:39:34.500214    7485 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:39:34.500218    7485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:39:34.500276    7485 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/functional-693000/config.json ...
	I0920 10:39:34.500707    7485 start.go:360] acquireMachinesLock for functional-693000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:39:34.500744    7485 start.go:364] duration metric: took 32.458µs to acquireMachinesLock for "functional-693000"
	I0920 10:39:34.500753    7485 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:39:34.500755    7485 fix.go:54] fixHost starting: 
	I0920 10:39:34.500891    7485 fix.go:112] recreateIfNeeded on functional-693000: state=Stopped err=<nil>
	W0920 10:39:34.500898    7485 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:39:34.507285    7485 out.go:177] * Restarting existing qemu2 VM for "functional-693000" ...
	I0920 10:39:34.511159    7485 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:39:34.511197    7485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:af:bb:19:af:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/disk.qcow2
	I0920 10:39:34.513203    7485 main.go:141] libmachine: STDOUT: 
	I0920 10:39:34.513215    7485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:39:34.513247    7485 fix.go:56] duration metric: took 12.489542ms for fixHost
	I0920 10:39:34.513251    7485 start.go:83] releasing machines lock for "functional-693000", held for 12.503875ms
	W0920 10:39:34.513257    7485 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:39:34.513284    7485 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:39:34.513289    7485 start.go:729] Will try again in 5 seconds ...
	I0920 10:39:39.515433    7485 start.go:360] acquireMachinesLock for functional-693000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:39:39.515853    7485 start.go:364] duration metric: took 354.166µs to acquireMachinesLock for "functional-693000"
	I0920 10:39:39.516041    7485 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:39:39.516095    7485 fix.go:54] fixHost starting: 
	I0920 10:39:39.516820    7485 fix.go:112] recreateIfNeeded on functional-693000: state=Stopped err=<nil>
	W0920 10:39:39.516840    7485 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:39:39.525205    7485 out.go:177] * Restarting existing qemu2 VM for "functional-693000" ...
	I0920 10:39:39.527144    7485 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:39:39.527336    7485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:af:bb:19:af:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/disk.qcow2
	I0920 10:39:39.536987    7485 main.go:141] libmachine: STDOUT: 
	I0920 10:39:39.537058    7485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:39:39.537161    7485 fix.go:56] duration metric: took 21.105542ms for fixHost
	I0920 10:39:39.537177    7485 start.go:83] releasing machines lock for "functional-693000", held for 21.287417ms
	W0920 10:39:39.537387    7485 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-693000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:39:39.545243    7485 out.go:201] 
	W0920 10:39:39.549267    7485 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:39:39.549314    7485 out.go:270] * 
	W0920 10:39:39.551650    7485 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:39:39.559165    7485 out.go:201] 
	
	
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
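
The start attempt above fails twice with the same STDERR, Failed to connect to "/var/run/socket_vmnet": Connection refused, which implicates the socket_vmnet daemon on the host rather than QEMU itself. A minimal Go probe, purely hypothetical and not part of the test suite, can confirm whether anything is listening on that unix socket (only the socket path is taken from the log; everything else is illustrative):

// probe_socket_vmnet.go - hypothetical diagnostic, not minikube code.
// Dials the unix socket used by socket_vmnet_client; a "connection
// refused" error here reproduces the libmachine STDERR in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Daemon not running, or listening on a different path.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
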
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-693000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-195000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | -p download-only-195000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| delete  | -p download-only-195000                                                  | download-only-195000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| start   | -o=json --download-only                                                  | download-only-177000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | -p download-only-177000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| delete  | -p download-only-177000                                                  | download-only-177000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| delete  | -p download-only-195000                                                  | download-only-195000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| delete  | -p download-only-177000                                                  | download-only-177000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| start   | --download-only -p                                                       | binary-mirror-158000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | binary-mirror-158000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51061                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-158000                                                  | binary-mirror-158000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| addons  | disable dashboard -p                                                     | addons-710000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | addons-710000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-710000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | addons-710000                                                            |                      |         |         |                     |                     |
| start   | -p addons-710000 --wait=true                                             | addons-710000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-710000                                                         | addons-710000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| start   | -p nospam-531000 -n=1 --memory=2250 --wait=false                         | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-531000                                                         | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
| start   | -p functional-693000                                                     | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-693000                                                     | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-693000 cache add                                              | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-693000 cache add                                              | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-693000 cache add                                              | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-693000 cache add                                              | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | minikube-local-cache-test:functional-693000                              |                      |         |         |                     |                     |
| cache   | functional-693000 cache delete                                           | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | minikube-local-cache-test:functional-693000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
| ssh     | functional-693000 ssh sudo                                               | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-693000                                                        | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-693000 ssh                                                    | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-693000 cache reload                                           | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
| ssh     | functional-693000 ssh                                                    | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-693000 kubectl --                                             | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | --context functional-693000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-693000                                                     | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/09/20 10:39:34
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0920 10:39:34.412007    7485 out.go:345] Setting OutFile to fd 1 ...
I0920 10:39:34.412146    7485 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:39:34.412148    7485 out.go:358] Setting ErrFile to fd 2...
I0920 10:39:34.412149    7485 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:39:34.412271    7485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
I0920 10:39:34.413309    7485 out.go:352] Setting JSON to false
I0920 10:39:34.429174    7485 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4137,"bootTime":1726849837,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0920 10:39:34.429236    7485 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0920 10:39:34.438291    7485 out.go:177] * [functional-693000] minikube v1.34.0 on Darwin 14.5 (arm64)
I0920 10:39:34.447256    7485 out.go:177]   - MINIKUBE_LOCATION=19678
I0920 10:39:34.447296    7485 notify.go:220] Checking for updates...
I0920 10:39:34.457177    7485 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
I0920 10:39:34.461221    7485 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0920 10:39:34.462508    7485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0920 10:39:34.465152    7485 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
I0920 10:39:34.468224    7485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0920 10:39:34.471550    7485 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:39:34.471614    7485 driver.go:394] Setting default libvirt URI to qemu:///system
I0920 10:39:34.476224    7485 out.go:177] * Using the qemu2 driver based on existing profile
I0920 10:39:34.483211    7485 start.go:297] selected driver: qemu2
I0920 10:39:34.483216    7485 start.go:901] validating driver "qemu2" against &{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 10:39:34.483271    7485 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0920 10:39:34.485669    7485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 10:39:34.485694    7485 cni.go:84] Creating CNI manager for ""
I0920 10:39:34.485724    7485 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 10:39:34.485766    7485 start.go:340] cluster config:
{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 10:39:34.489433    7485 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 10:39:34.496203    7485 out.go:177] * Starting "functional-693000" primary control-plane node in "functional-693000" cluster
I0920 10:39:34.500142    7485 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 10:39:34.500157    7485 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0920 10:39:34.500160    7485 cache.go:56] Caching tarball of preloaded images
I0920 10:39:34.500214    7485 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0920 10:39:34.500218    7485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0920 10:39:34.500276    7485 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/functional-693000/config.json ...
I0920 10:39:34.500707    7485 start.go:360] acquireMachinesLock for functional-693000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 10:39:34.500744    7485 start.go:364] duration metric: took 32.458µs to acquireMachinesLock for "functional-693000"
I0920 10:39:34.500753    7485 start.go:96] Skipping create...Using existing machine configuration
I0920 10:39:34.500755    7485 fix.go:54] fixHost starting: 
I0920 10:39:34.500891    7485 fix.go:112] recreateIfNeeded on functional-693000: state=Stopped err=<nil>
W0920 10:39:34.500898    7485 fix.go:138] unexpected machine state, will restart: <nil>
I0920 10:39:34.507285    7485 out.go:177] * Restarting existing qemu2 VM for "functional-693000" ...
I0920 10:39:34.511159    7485 qemu.go:418] Using hvf for hardware acceleration
I0920 10:39:34.511197    7485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:af:bb:19:af:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/disk.qcow2
I0920 10:39:34.513203    7485 main.go:141] libmachine: STDOUT: 
I0920 10:39:34.513215    7485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0920 10:39:34.513247    7485 fix.go:56] duration metric: took 12.489542ms for fixHost
I0920 10:39:34.513251    7485 start.go:83] releasing machines lock for "functional-693000", held for 12.503875ms
W0920 10:39:34.513257    7485 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0920 10:39:34.513284    7485 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0920 10:39:34.513289    7485 start.go:729] Will try again in 5 seconds ...
I0920 10:39:39.515433    7485 start.go:360] acquireMachinesLock for functional-693000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 10:39:39.515853    7485 start.go:364] duration metric: took 354.166µs to acquireMachinesLock for "functional-693000"
I0920 10:39:39.516041    7485 start.go:96] Skipping create...Using existing machine configuration
I0920 10:39:39.516095    7485 fix.go:54] fixHost starting: 
I0920 10:39:39.516820    7485 fix.go:112] recreateIfNeeded on functional-693000: state=Stopped err=<nil>
W0920 10:39:39.516840    7485 fix.go:138] unexpected machine state, will restart: <nil>
I0920 10:39:39.525205    7485 out.go:177] * Restarting existing qemu2 VM for "functional-693000" ...
I0920 10:39:39.527144    7485 qemu.go:418] Using hvf for hardware acceleration
I0920 10:39:39.527336    7485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:af:bb:19:af:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/disk.qcow2
I0920 10:39:39.536987    7485 main.go:141] libmachine: STDOUT: 
I0920 10:39:39.537058    7485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0920 10:39:39.537161    7485 fix.go:56] duration metric: took 21.105542ms for fixHost
I0920 10:39:39.537177    7485 start.go:83] releasing machines lock for "functional-693000", held for 21.287417ms
W0920 10:39:39.537387    7485 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-693000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0920 10:39:39.545243    7485 out.go:201] 
W0920 10:39:39.549267    7485 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0920 10:39:39.549314    7485 out.go:270] * 
W0920 10:39:39.551650    7485 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0920 10:39:39.559165    7485 out.go:201] 

* The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
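
For reference, the failure reported at functional_test.go:1228 amounts to checking that the captured "minikube logs" output mentions the word "Linux" (as the logs of a booted guest would); the stopped host only produces the state=Stopped advice above. A hedged sketch of that kind of check, with the helper name and sample text purely illustrative, not the suite's actual code:

// logs_word_check.go - illustrative stand-in for the assertion at
// functional_test.go:1228; not the actual test implementation.
package main

import (
	"fmt"
	"strings"
)

// logsContainWord reports whether the captured logs mention the word.
func logsContainWord(logs, word string) bool {
	return strings.Contains(logs, word)
}

func main() {
	got := `* The control-plane node functional-693000 host is not running: state=Stopped`
	fmt.Println(logsContainWord(got, "Linux")) // false: the VM never booted, so no kernel line
}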

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd724925180/001/logs.txt
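
This variant writes the same log text to the given file instead of stdout; the assertion that follows scans that file and fails for the same reason as LogsCmd. A hypothetical reader for the --file output (the path below is a stand-in for the temp file in the command above):

// logs_file_check.go - hypothetical checker, not the suite's code.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("logs.txt") // stand-in for the TestFunctionalserialLogsFileCmd temp path
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("contains Linux:", strings.Contains(string(data), "Linux"))
}
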
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-195000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | -p download-only-195000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| delete  | -p download-only-195000                                                  | download-only-195000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| start   | -o=json --download-only                                                  | download-only-177000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | -p download-only-177000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| delete  | -p download-only-177000                                                  | download-only-177000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| delete  | -p download-only-195000                                                  | download-only-195000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| delete  | -p download-only-177000                                                  | download-only-177000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| start   | --download-only -p                                                       | binary-mirror-158000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | binary-mirror-158000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51061                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-158000                                                  | binary-mirror-158000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| addons  | disable dashboard -p                                                     | addons-710000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | addons-710000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-710000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | addons-710000                                                            |                      |         |         |                     |                     |
| start   | -p addons-710000 --wait=true                                             | addons-710000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-710000                                                         | addons-710000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
| start   | -p nospam-531000 -n=1 --memory=2250 --wait=false                         | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-531000 --log_dir                                                  | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-531000                                                         | nospam-531000        | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
| start   | -p functional-693000                                                     | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-693000                                                     | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-693000 cache add                                              | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-693000 cache add                                              | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-693000 cache add                                              | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-693000 cache add                                              | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | minikube-local-cache-test:functional-693000                              |                      |         |         |                     |                     |
| cache   | functional-693000 cache delete                                           | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | minikube-local-cache-test:functional-693000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
| ssh     | functional-693000 ssh sudo                                               | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-693000                                                        | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-693000 ssh                                                    | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-693000 cache reload                                           | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
| ssh     | functional-693000 ssh                                                    | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT | 20 Sep 24 10:39 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-693000 kubectl --                                             | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | --context functional-693000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-693000                                                     | functional-693000    | jenkins | v1.34.0 | 20 Sep 24 10:39 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/09/20 10:39:34
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0920 10:39:34.412007    7485 out.go:345] Setting OutFile to fd 1 ...
I0920 10:39:34.412146    7485 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:39:34.412148    7485 out.go:358] Setting ErrFile to fd 2...
I0920 10:39:34.412149    7485 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:39:34.412271    7485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
I0920 10:39:34.413309    7485 out.go:352] Setting JSON to false
I0920 10:39:34.429174    7485 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4137,"bootTime":1726849837,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0920 10:39:34.429236    7485 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0920 10:39:34.438291    7485 out.go:177] * [functional-693000] minikube v1.34.0 on Darwin 14.5 (arm64)
I0920 10:39:34.447256    7485 out.go:177]   - MINIKUBE_LOCATION=19678
I0920 10:39:34.447296    7485 notify.go:220] Checking for updates...
I0920 10:39:34.457177    7485 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
I0920 10:39:34.461221    7485 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0920 10:39:34.462508    7485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0920 10:39:34.465152    7485 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
I0920 10:39:34.468224    7485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0920 10:39:34.471550    7485 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:39:34.471614    7485 driver.go:394] Setting default libvirt URI to qemu:///system
I0920 10:39:34.476224    7485 out.go:177] * Using the qemu2 driver based on existing profile
I0920 10:39:34.483211    7485 start.go:297] selected driver: qemu2
I0920 10:39:34.483216    7485 start.go:901] validating driver "qemu2" against &{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 10:39:34.483271    7485 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0920 10:39:34.485669    7485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 10:39:34.485694    7485 cni.go:84] Creating CNI manager for ""
I0920 10:39:34.485724    7485 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 10:39:34.485766    7485 start.go:340] cluster config:
{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 10:39:34.489433    7485 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 10:39:34.496203    7485 out.go:177] * Starting "functional-693000" primary control-plane node in "functional-693000" cluster
I0920 10:39:34.500142    7485 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 10:39:34.500157    7485 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0920 10:39:34.500160    7485 cache.go:56] Caching tarball of preloaded images
I0920 10:39:34.500214    7485 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0920 10:39:34.500218    7485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0920 10:39:34.500276    7485 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/functional-693000/config.json ...
I0920 10:39:34.500707    7485 start.go:360] acquireMachinesLock for functional-693000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 10:39:34.500744    7485 start.go:364] duration metric: took 32.458µs to acquireMachinesLock for "functional-693000"
I0920 10:39:34.500753    7485 start.go:96] Skipping create...Using existing machine configuration
I0920 10:39:34.500755    7485 fix.go:54] fixHost starting: 
I0920 10:39:34.500891    7485 fix.go:112] recreateIfNeeded on functional-693000: state=Stopped err=<nil>
W0920 10:39:34.500898    7485 fix.go:138] unexpected machine state, will restart: <nil>
I0920 10:39:34.507285    7485 out.go:177] * Restarting existing qemu2 VM for "functional-693000" ...
I0920 10:39:34.511159    7485 qemu.go:418] Using hvf for hardware acceleration
I0920 10:39:34.511197    7485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:af:bb:19:af:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/disk.qcow2
I0920 10:39:34.513203    7485 main.go:141] libmachine: STDOUT: 
I0920 10:39:34.513215    7485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0920 10:39:34.513247    7485 fix.go:56] duration metric: took 12.489542ms for fixHost
I0920 10:39:34.513251    7485 start.go:83] releasing machines lock for "functional-693000", held for 12.503875ms
W0920 10:39:34.513257    7485 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0920 10:39:34.513284    7485 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0920 10:39:34.513289    7485 start.go:729] Will try again in 5 seconds ...
I0920 10:39:39.515433    7485 start.go:360] acquireMachinesLock for functional-693000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 10:39:39.515853    7485 start.go:364] duration metric: took 354.166µs to acquireMachinesLock for "functional-693000"
I0920 10:39:39.516041    7485 start.go:96] Skipping create...Using existing machine configuration
I0920 10:39:39.516095    7485 fix.go:54] fixHost starting: 
I0920 10:39:39.516820    7485 fix.go:112] recreateIfNeeded on functional-693000: state=Stopped err=<nil>
W0920 10:39:39.516840    7485 fix.go:138] unexpected machine state, will restart: <nil>
I0920 10:39:39.525205    7485 out.go:177] * Restarting existing qemu2 VM for "functional-693000" ...
I0920 10:39:39.527144    7485 qemu.go:418] Using hvf for hardware acceleration
I0920 10:39:39.527336    7485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:af:bb:19:af:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/functional-693000/disk.qcow2
I0920 10:39:39.536987    7485 main.go:141] libmachine: STDOUT: 
I0920 10:39:39.537058    7485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0920 10:39:39.537161    7485 fix.go:56] duration metric: took 21.105542ms for fixHost
I0920 10:39:39.537177    7485 start.go:83] releasing machines lock for "functional-693000", held for 21.287417ms
W0920 10:39:39.537387    7485 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-693000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0920 10:39:39.545243    7485 out.go:201] 
W0920 10:39:39.549267    7485 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0920 10:39:39.549314    7485 out.go:270] * 
W0920 10:39:39.551650    7485 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0920 10:39:39.559165    7485 out.go:201] 
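Both start attempts in the log above fail at the same point: the qemu2 driver launches QEMU through socket_vmnet_client, and the client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so provisioning aborts with GUEST_PROVISION. The failing step is an ordinary unix-socket dial; a minimal standalone sketch (illustrative, not minikube's own code) that reproduces the same connectivity check:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from the socket_vmnet_client invocation logged above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the STDERR in the log: the
		// socket_vmnet service is installed but not listening on the host.
		fmt.Printf("cannot reach %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}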

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-693000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-693000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.178625ms)

** stderr ** 
	error: context "functional-693000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-693000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
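Every kubectl failure in this run reduces to the same configuration problem: the cluster never started, so no "functional-693000" context was ever written to the kubeconfig. As a hedged sketch (assuming k8s.io/client-go is available; this is not the test suite's code), the context lookup kubectl performs can be reproduced directly:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from $KUBECONFIG or ~/.kube/config, as kubectl does.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "error loading kubeconfig:", err)
		os.Exit(1)
	}
	const name = "functional-693000"
	if _, ok := cfg.Contexts[name]; !ok {
		// Mirrors kubectl's error: context "functional-693000" does not exist
		fmt.Fprintf(os.Stderr, "error: context %q does not exist\n", name)
		os.Exit(1)
	}
	fmt.Println("context", name, "is present")
}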

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-693000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-693000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-693000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-693000 --alsologtostderr -v=1] stderr:
I0920 10:40:27.335041    7799 out.go:345] Setting OutFile to fd 1 ...
I0920 10:40:27.335436    7799 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:40:27.335440    7799 out.go:358] Setting ErrFile to fd 2...
I0920 10:40:27.335442    7799 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:40:27.335596    7799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
I0920 10:40:27.335805    7799 mustload.go:65] Loading cluster: functional-693000
I0920 10:40:27.336021    7799 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:40:27.339475    7799 out.go:177] * The control-plane node functional-693000 host is not running: state=Stopped
I0920 10:40:27.343230    7799 out.go:177]   To start a cluster, run: "minikube start -p functional-693000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (42.21025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
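functional_test.go:918 reports "output didn't produce a URL": the dashboard subprocess printed the stopped-host advice instead of an http://... address. A rough sketch of that kind of check (the regexp and program structure are assumptions, not the test's implementation), reading the subprocess output on stdin:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// e.g. minikube dashboard --url ... | go run urlscan.go
	urlRe := regexp.MustCompile(`https?://\S+`)
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if u := urlRe.FindString(sc.Text()); u != "" {
			fmt.Println("dashboard URL:", u)
			return
		}
	}
	// EOF without a match: the condition this test reports.
	fmt.Fprintln(os.Stderr, "output didn't produce a URL")
	os.Exit(1)
}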

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 status: exit status 7 (30.43625ms)

-- stdout --
	functional-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-693000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (29.744625ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-693000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 status -o json: exit status 7 (29.086708ms)

-- stdout --
	{"Name":"functional-693000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-693000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (30.1545ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
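The -f/--format flag renders the status struct through a Go text/template, which is why {{.Host}} and friends still expand to "Stopped" while the command exits with status 7; the JSON output above shows the field names that are available. (The "kublet:" spelling in the test's format string is literal label text, not a field reference, so it renders as-is.) A minimal rendering sketch under that assumption; the Status struct here only mirrors the fields visible in the JSON line, not minikube's full type:

package main

import (
	"os"
	"text/template"
)

// Status mirrors the field names visible in the JSON output above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// The same template text the test passed via -f.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Name: "functional-693000", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	// Prints: host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
	_ = tmpl.Execute(os.Stdout, st)
}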

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-693000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-693000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.154292ms)

** stderr ** 
	error: context "functional-693000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-693000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-693000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-693000 describe po hello-node-connect: exit status 1 (26.386583ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-693000

** /stderr **
functional_test.go:1604: "kubectl --context functional-693000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-693000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-693000 logs -l app=hello-node-connect: exit status 1 (26.503125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-693000

** /stderr **
functional_test.go:1610: "kubectl --context functional-693000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-693000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-693000 describe svc hello-node-connect: exit status 1 (26.336334ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-693000

** /stderr **
functional_test.go:1616: "kubectl --context functional-693000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (30.815458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-693000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (29.8465ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "echo hello": exit status 83 (40.8385ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-693000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-693000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-693000\"\n"*. args "out/minikube-darwin-arm64 -p functional-693000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "cat /etc/hostname": exit status 83 (46.499125ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-693000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-693000"- but got *"* The control-plane node functional-693000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-693000\"\n"*. args "out/minikube-darwin-arm64 -p functional-693000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (31.551667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (56.906ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-693000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh -n functional-693000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh -n functional-693000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.725334ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-693000 ssh -n functional-693000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-693000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-693000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cp functional-693000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd4187086287/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 cp functional-693000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd4187086287/001/cp-test.txt: exit status 83 (50.655458ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-693000 cp functional-693000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd4187086287/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh -n functional-693000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh -n functional-693000 "sudo cat /home/docker/cp-test.txt": exit status 83 (46.861ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-693000 ssh -n functional-693000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd4187086287/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-693000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-693000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (47.739209ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-693000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh -n functional-693000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh -n functional-693000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (39.949875ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-693000 ssh -n functional-693000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-693000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-693000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.29s)
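The "(-want +got)" blocks above have the shape of github.com/google/go-cmp output: the test round-trips testdata/cp-test.txt through the (stopped) VM and diffs what came back against the expected file content, so the "got" side is minikube's advice text rather than the file. A small sketch reproducing that diff (the want/got strings are copied from the log; the use of go-cmp is an assumption that matches the output format, though the test's actual call site lives in helpers_test.go):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-693000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-693000\"\n"
	// cmp.Diff returns "" when equal; otherwise a (-want +got) report like the one above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("/testdata/cp-test.txt content mismatch (-want +got):\n%s", diff)
	}
}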

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7191/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/test/nested/copy/7191/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/test/nested/copy/7191/hosts": exit status 83 (39.63825ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/test/nested/copy/7191/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-693000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-693000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (31.743125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7191.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/ssl/certs/7191.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/ssl/certs/7191.pem": exit status 83 (40.854541ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

                                                
                                                
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/7191.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-693000 ssh \"sudo cat /etc/ssl/certs/7191.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7191.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-693000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-693000"
  	"""
  )
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7191.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /usr/share/ca-certificates/7191.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /usr/share/ca-certificates/7191.pem": exit status 83 (41.764125ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/7191.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-693000 ssh \"sudo cat /usr/share/ca-certificates/7191.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7191.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-693000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-693000"
  	"""
  )
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (43.492667ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-693000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-693000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-693000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/71912.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/ssl/certs/71912.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/ssl/certs/71912.pem": exit status 83 (40.760458ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/71912.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-693000 ssh \"sudo cat /etc/ssl/certs/71912.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/71912.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-693000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-693000"
  	"""
  )
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/71912.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /usr/share/ca-certificates/71912.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /usr/share/ca-certificates/71912.pem": exit status 83 (40.769625ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/71912.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-693000 ssh \"sudo cat /usr/share/ca-certificates/71912.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/71912.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-693000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-693000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (40.76ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-693000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-693000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-693000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (30.594ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)
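
CertSync runs the same probe over six paths: the pid-named copies of the two test certificates plus their hash-named links in /etc/ssl/certs (the `51391683.0` / `3ec20f2e.0` names look like OpenSSL subject-hash filenames, though that derivation is an assumption here). A sketch of the loop, with the path list taken from the log above and the want/got plumbing elided:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Paths probed by the test; 7191 is the test process pid baked into
    	// the remote file names.
    	paths := []string{
    		"/etc/ssl/certs/7191.pem",
    		"/usr/share/ca-certificates/7191.pem",
    		"/etc/ssl/certs/51391683.0", // hash-named link for minikube_test.pem
    		"/etc/ssl/certs/71912.pem",
    		"/usr/share/ca-certificates/71912.pem",
    		"/etc/ssl/certs/3ec20f2e.0", // hash-named link for minikube_test2.pem
    	}
    	for _, p := range paths {
    		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-693000",
    			"ssh", "sudo cat "+p).CombinedOutput()
    		if err != nil {
    			fmt.Printf("%s: %v\n%s", p, err, out)
    		}
    	}
    }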

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-693000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-693000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.051667ms)

** stderr **
	Error in configuration: context was not found for specified context: functional-693000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-693000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr **
	Error in configuration: context was not found for specified context: functional-693000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr **
	Error in configuration: context was not found for specified context: functional-693000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr **
	Error in configuration: context was not found for specified context: functional-693000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr **
	Error in configuration: context was not found for specified context: functional-693000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr **
	Error in configuration: context was not found for specified context: functional-693000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000: exit status 7 (31.956584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
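
The kubectl go-template never executes here because the kubeconfig context is gone, but what it would print is easy to reproduce with text/template directly; the label map below is a hypothetical stand-in for `(index .items 0).metadata.labels`:

    package main

    import (
    	"os"
    	"text/template"
    )

    func main() {
    	// Hypothetical stand-in for a node's metadata.labels.
    	labels := map[string]string{
    		"minikube.k8s.io/name":    "functional-693000",
    		"minikube.k8s.io/primary": "true",
    	}
    	// Same template body the test passes to kubectl: print each label key.
    	t := template.Must(template.New("labels").Parse(
    		"{{range $k, $v := .}}{{$k}} {{end}}"))
    	t.Execute(os.Stdout, labels) // "minikube.k8s.io/name minikube.k8s.io/primary "
    }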

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "sudo systemctl is-active crio": exit status 83 (45.271792ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-693000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-693000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
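
Note the inverted success condition: with docker as the active runtime, `systemctl is-active crio` on a healthy guest prints `inactive` and exits non-zero (typically status 3), and that failure is what the test wants to see. A sketch of the probe, assuming a reachable guest:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-693000",
    		"ssh", "sudo systemctl is-active crio").CombinedOutput()
    	// On a running docker-runtime guest this prints "inactive" with a
    	// non-zero exit, which counts as a pass. Here the guest is stopped,
    	// so we get minikube's advisory text and exit status 83 instead.
    	fmt.Printf("out=%q err=%v\n", out, err)
    }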

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 version -o=json --components: exit status 83 (41.742667ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
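
On a running cluster `version -o=json --components` emits a JSON object containing the component keys the test greps for (buildctl, containerd, crictl, docker, minikubeVersion, ...); here the advisory text is not JSON at all. A sketch of consuming the output, where the schema beyond those key names is an assumption:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-693000",
    		"version", "-o=json", "--components").Output()
    	var v map[string]any
    	if err := json.Unmarshal(out, &v); err != nil {
    		// The advisory text from a stopped profile is not JSON.
    		fmt.Println("no JSON in output:", err)
    		return
    	}
    	fmt.Println(v["minikubeVersion"])
    }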

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-693000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-693000 image ls --format short --alsologtostderr:
I0920 10:40:27.741303    7814 out.go:345] Setting OutFile to fd 1 ...
I0920 10:40:27.741472    7814 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:40:27.741476    7814 out.go:358] Setting ErrFile to fd 2...
I0920 10:40:27.741478    7814 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:40:27.741609    7814 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
I0920 10:40:27.742045    7814 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:40:27.742108    7814 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-693000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-693000 image ls --format table --alsologtostderr:
I0920 10:40:27.814382    7818 out.go:345] Setting OutFile to fd 1 ...
I0920 10:40:27.814525    7818 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:40:27.814529    7818 out.go:358] Setting ErrFile to fd 2...
I0920 10:40:27.814531    7818 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:40:27.814650    7818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
I0920 10:40:27.815124    7818 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:40:27.815182    7818 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-693000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-693000 image ls --format json --alsologtostderr:
I0920 10:40:27.777535    7816 out.go:345] Setting OutFile to fd 1 ...
I0920 10:40:27.777672    7816 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:40:27.777675    7816 out.go:358] Setting ErrFile to fd 2...
I0920 10:40:27.777678    7816 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:40:27.777843    7816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
I0920 10:40:27.778274    7816 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:40:27.778334    7816 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-693000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-693000 image ls --format yaml --alsologtostderr:
I0920 10:40:27.851285    7820 out.go:345] Setting OutFile to fd 1 ...
I0920 10:40:27.851435    7820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:40:27.851438    7820 out.go:358] Setting ErrFile to fd 2...
I0920 10:40:27.851440    7820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:40:27.851553    7820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
I0920 10:40:27.851967    7820 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:40:27.852034    7820 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
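
The four ImageList variants above read the same empty image set and differ only in rendering: short prints one reference per line, table draws the header-only grid, json prints `[]`, and yaml an empty sequence. A sketch of consuming the json form, decoding into raw messages to avoid guessing minikube's element schema:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-693000",
    		"image", "ls", "--format", "json").Output()
    	// With no images (or no reachable guest) the command prints "[]".
    	var imgs []json.RawMessage
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		fmt.Println("unexpected output:", err)
    		return
    	}
    	// 0 here; the test wants registry.k8s.io/pause to be present.
    	fmt.Printf("%d images listed\n", len(imgs))
    }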

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh pgrep buildkitd: exit status 83 (41.891958ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image build -t localhost/my-image:functional-693000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-693000 image build -t localhost/my-image:functional-693000 testdata/build --alsologtostderr:
I0920 10:40:27.928517    7824 out.go:345] Setting OutFile to fd 1 ...
I0920 10:40:27.928949    7824 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:40:27.928953    7824 out.go:358] Setting ErrFile to fd 2...
I0920 10:40:27.928956    7824 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:40:27.929114    7824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
I0920 10:40:27.929539    7824 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:40:27.930018    7824 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:40:27.930245    7824 build_images.go:133] succeeded building to: 
I0920 10:40:27.930248    7824 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls
functional_test.go:446: expected "localhost/my-image:functional-693000" to be loaded into minikube but the image is not there
I0920 10:40:48.869782    7191 retry.go:31] will retry after 17.268600485s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-693000 docker-env) && out/minikube-darwin-arm64 status -p functional-693000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-693000 docker-env) && out/minikube-darwin-arm64 status -p functional-693000": exit status 1 (43.19075ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)
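
The test only verifies that `status` still succeeds after `eval`-ing the docker-env exports; with the profile stopped there is no usable docker-env output to evaluate, and the `&&` chain exits with status 1. The same shape as a standalone sketch, assuming /bin/bash is available:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same shape as the test: source docker-env, then require status to succeed.
    	cmd := exec.Command("/bin/bash", "-c",
    		"eval $(out/minikube-darwin-arm64 -p functional-693000 docker-env) && "+
    			"out/minikube-darwin-arm64 status -p functional-693000")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s(err: %v)\n", out, err) // exit status 1 on this run
    }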

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 update-context --alsologtostderr -v=2: exit status 83 (42.713625ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
** stderr ** 
	I0920 10:40:27.611585    7808 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:40:27.612709    7808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:40:27.612718    7808 out.go:358] Setting ErrFile to fd 2...
	I0920 10:40:27.612721    7808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:40:27.612855    7808 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:40:27.613073    7808 mustload.go:65] Loading cluster: functional-693000
	I0920 10:40:27.613313    7808 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:40:27.617218    7808 out.go:177] * The control-plane node functional-693000 host is not running: state=Stopped
	I0920 10:40:27.621128    7808 out.go:177]   To start a cluster, run: "minikube start -p functional-693000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-693000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-693000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-693000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)
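
The `want=*"No changes"*` in the log is a glob-style substring expectation: a healthy `update-context` reports either that the kubeconfig IP was already correct ("No changes") or that it was rewritten ("context has been updated", which the next two subtests expect). Roughly the check, with plain substring matching standing in for the harness's matcher:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-693000",
    		"update-context", "--alsologtostderr", "-v=2").CombinedOutput()
    	fmt.Println("err:", err) // exit status 83 while the host is stopped
    	if !strings.Contains(string(out), "No changes") {
    		fmt.Println("update-context did not report a no-op kubeconfig update")
    	}
    }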

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 update-context --alsologtostderr -v=2: exit status 83 (43.729084ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
** stderr ** 
	I0920 10:40:27.699022    7812 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:40:27.699168    7812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:40:27.699171    7812 out.go:358] Setting ErrFile to fd 2...
	I0920 10:40:27.699174    7812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:40:27.699310    7812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:40:27.699596    7812 mustload.go:65] Loading cluster: functional-693000
	I0920 10:40:27.699793    7812 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:40:27.703261    7812 out.go:177] * The control-plane node functional-693000 host is not running: state=Stopped
	I0920 10:40:27.707084    7812 out.go:177]   To start a cluster, run: "minikube start -p functional-693000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-693000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-693000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-693000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 update-context --alsologtostderr -v=2: exit status 83 (41.568208ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
** stderr ** 
	I0920 10:40:27.655320    7810 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:40:27.655467    7810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:40:27.655470    7810 out.go:358] Setting ErrFile to fd 2...
	I0920 10:40:27.655472    7810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:40:27.655609    7810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:40:27.655840    7810 mustload.go:65] Loading cluster: functional-693000
	I0920 10:40:27.656040    7810 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:40:27.659202    7810 out.go:177] * The control-plane node functional-693000 host is not running: state=Stopped
	I0920 10:40:27.663182    7810 out.go:177]   To start a cluster, run: "minikube start -p functional-693000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-693000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-693000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-693000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-693000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-693000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.492667ms)

** stderr **
	error: context "functional-693000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-693000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 service list: exit status 83 (43.8935ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-693000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-693000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-693000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 service list -o json: exit status 83 (42.882542ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-693000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 service --namespace=default --https --url hello-node: exit status 83 (46.568333ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-693000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 service hello-node --url --format={{.IP}}: exit status 83 (43.319042ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-693000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-693000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-693000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 service hello-node --url: exit status 83 (47.668625ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-693000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
functional_test.go:1569: failed to parse "* The control-plane node functional-693000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-693000\"": parse "* The control-plane node functional-693000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-693000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
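
The parse failure at the end is mechanical: the advisory text is handed to url.Parse verbatim, and the embedded newline is an ASCII control character, which net/url rejects. The error is reproducible in isolation:

    package main

    import (
    	"fmt"
    	"net/url"
    )

    func main() {
    	// The string minikube printed instead of a service URL, as logged above.
    	got := "* The control-plane node functional-693000 host is not running: state=Stopped\n" +
    		"  To start a cluster, run: \"minikube start -p functional-693000\""
    	_, err := url.Parse(got)
    	fmt.Println(err) // net/url: invalid control character in URL
    }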

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0920 10:39:41.377195    7602 out.go:345] Setting OutFile to fd 1 ...
I0920 10:39:41.377363    7602 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:39:41.377366    7602 out.go:358] Setting ErrFile to fd 2...
I0920 10:39:41.377369    7602 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:39:41.377504    7602 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
I0920 10:39:41.377726    7602 mustload.go:65] Loading cluster: functional-693000
I0920 10:39:41.377934    7602 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:39:41.382041    7602 out.go:177] * The control-plane node functional-693000 host is not running: state=Stopped
I0920 10:39:41.394915    7602 out.go:177]   To start a cluster, run: "minikube start -p functional-693000"

stdout: * The control-plane node functional-693000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-693000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7603: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)
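Note: the repeated "read |0: file already closed" lines during teardown are a known os/exec pattern, not an independent failure: Wait closes a command's StdoutPipe/StderrPipe once the process exits, so any read issued afterwards fails with os.ErrClosed. A minimal sketch of the same sequence (the command name is illustrative):

```go
package main

import (
	"fmt"
	"io"
	"os/exec"
)

func main() {
	cmd := exec.Command("true") // any short-lived command
	out, _ := cmd.StdoutPipe()
	_ = cmd.Start()
	_ = cmd.Wait()            // Wait closes the pipe after the process exits
	_, err := io.ReadAll(out) // so a later read fails
	fmt.Println(err)          // read |0: file already closed
}
```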

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-693000": client config: context "functional-693000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (84.78s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0920 10:39:41.447664    7191 retry.go:31] will retry after 3.137671575s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-693000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-693000 get svc nginx-svc: exit status 1 (70.513083ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-693000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-693000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (84.78s)
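Note: the "http: no Host in request URL" retries show the tunnel never published a LoadBalancer ingress IP, so the test built its probe URL from an empty address. net/http rejects such a URL before any dialing happens, which is why each retry fails instantly. A sketch:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	ip := "" // what the test got instead of the tunnel's ingress IP
	_, err := http.Get("http://" + ip)
	fmt.Println(err) // Get "http:": http: no Host in request URL
}
```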

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image load --daemon kicbase/echo-server:functional-693000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-693000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image load --daemon kicbase/echo-server:functional-693000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-693000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-693000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image load --daemon kicbase/echo-server:functional-693000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-693000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image save kicbase/echo-server:functional-693000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-693000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0920 10:41:06.225010    7191 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.026200417s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 17 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
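Note: dig timed out on all three tries even though scutil shows the cluster.local resolver entry (resolver #8, 10.96.0.10) was installed, so what is missing is the host's route to the service CIDR, not the resolver configuration. The same check can be reproduced in Go with a resolver pinned to the in-cluster DNS address; the address is taken from the log above, and this is a sketch, not the harness code:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Force lookups to the cluster DNS service instead of the system resolver.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	fmt.Println(addrs, err) // times out while no route to 10.96.0.10 exists
}
```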

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (25.38s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0920 10:41:31.348649    7191 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:41:31.349806    7191 retry.go:31] will retry after 3.310714125s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: write udp 192.168.105.1:64645->10.96.0.10:53: write: no route to host
I0920 10:41:34.664322    7191 retry.go:31] will retry after 4.2970961s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: write udp 192.168.105.1:63877->10.96.0.10:53: write: no route to host
I0920 10:41:38.965401    7191 retry.go:31] will retry after 7.726464952s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: write udp 192.168.105.1:58851->10.96.0.10:53: write: no route to host
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (25.38s)

TestMultiControlPlane/serial/StartCluster (9.87s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-763000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-763000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.799063709s)

-- stdout --
	* [ha-763000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-763000" primary control-plane node in "ha-763000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-763000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:41:57.081829    7862 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:41:57.081978    7862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:41:57.081981    7862 out.go:358] Setting ErrFile to fd 2...
	I0920 10:41:57.081983    7862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:41:57.082121    7862 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:41:57.083246    7862 out.go:352] Setting JSON to false
	I0920 10:41:57.099297    7862 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4280,"bootTime":1726849837,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:41:57.099366    7862 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:41:57.105663    7862 out.go:177] * [ha-763000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:41:57.113784    7862 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:41:57.113851    7862 notify.go:220] Checking for updates...
	I0920 10:41:57.119821    7862 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:41:57.122837    7862 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:41:57.125781    7862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:41:57.128756    7862 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:41:57.131785    7862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:41:57.133379    7862 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:41:57.137800    7862 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:41:57.144588    7862 start.go:297] selected driver: qemu2
	I0920 10:41:57.144594    7862 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:41:57.144600    7862 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:41:57.146816    7862 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:41:57.149835    7862 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:41:57.152904    7862 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:41:57.152927    7862 cni.go:84] Creating CNI manager for ""
	I0920 10:41:57.152949    7862 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 10:41:57.152953    7862 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 10:41:57.152981    7862 start.go:340] cluster config:
	{Name:ha-763000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:41:57.156684    7862 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:41:57.163755    7862 out.go:177] * Starting "ha-763000" primary control-plane node in "ha-763000" cluster
	I0920 10:41:57.167839    7862 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:41:57.167856    7862 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:41:57.167865    7862 cache.go:56] Caching tarball of preloaded images
	I0920 10:41:57.167956    7862 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:41:57.167963    7862 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:41:57.168172    7862 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/ha-763000/config.json ...
	I0920 10:41:57.168184    7862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/ha-763000/config.json: {Name:mkf92cb0f46e49491307c188d950ad1e08082be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:41:57.168412    7862 start.go:360] acquireMachinesLock for ha-763000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:41:57.168445    7862 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "ha-763000"
	I0920 10:41:57.168458    7862 start.go:93] Provisioning new machine with config: &{Name:ha-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:41:57.168492    7862 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:41:57.176818    7862 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:41:57.193960    7862 start.go:159] libmachine.API.Create for "ha-763000" (driver="qemu2")
	I0920 10:41:57.193988    7862 client.go:168] LocalClient.Create starting
	I0920 10:41:57.194056    7862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:41:57.194086    7862 main.go:141] libmachine: Decoding PEM data...
	I0920 10:41:57.194095    7862 main.go:141] libmachine: Parsing certificate...
	I0920 10:41:57.194126    7862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:41:57.194169    7862 main.go:141] libmachine: Decoding PEM data...
	I0920 10:41:57.194181    7862 main.go:141] libmachine: Parsing certificate...
	I0920 10:41:57.194523    7862 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:41:57.380499    7862 main.go:141] libmachine: Creating SSH key...
	I0920 10:41:57.422646    7862 main.go:141] libmachine: Creating Disk image...
	I0920 10:41:57.422651    7862 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:41:57.422847    7862 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2
	I0920 10:41:57.432050    7862 main.go:141] libmachine: STDOUT: 
	I0920 10:41:57.432076    7862 main.go:141] libmachine: STDERR: 
	I0920 10:41:57.432137    7862 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2 +20000M
	I0920 10:41:57.439995    7862 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:41:57.440019    7862 main.go:141] libmachine: STDERR: 
	I0920 10:41:57.440036    7862 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2
	I0920 10:41:57.440041    7862 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:41:57.440050    7862 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:41:57.440080    7862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:7a:77:fe:8b:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2
	I0920 10:41:57.441800    7862 main.go:141] libmachine: STDOUT: 
	I0920 10:41:57.441819    7862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:41:57.441838    7862 client.go:171] duration metric: took 247.846083ms to LocalClient.Create
	I0920 10:41:59.444004    7862 start.go:128] duration metric: took 2.275501417s to createHost
	I0920 10:41:59.444067    7862 start.go:83] releasing machines lock for "ha-763000", held for 2.275623375s
	W0920 10:41:59.444151    7862 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:41:59.456494    7862 out.go:177] * Deleting "ha-763000" in qemu2 ...
	W0920 10:41:59.488555    7862 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:41:59.488573    7862 start.go:729] Will try again in 5 seconds ...
	I0920 10:42:04.490874    7862 start.go:360] acquireMachinesLock for ha-763000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:42:04.491538    7862 start.go:364] duration metric: took 527.041µs to acquireMachinesLock for "ha-763000"
	I0920 10:42:04.491704    7862 start.go:93] Provisioning new machine with config: &{Name:ha-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:42:04.491994    7862 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:42:04.512851    7862 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:42:04.565854    7862 start.go:159] libmachine.API.Create for "ha-763000" (driver="qemu2")
	I0920 10:42:04.565906    7862 client.go:168] LocalClient.Create starting
	I0920 10:42:04.566021    7862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:42:04.566085    7862 main.go:141] libmachine: Decoding PEM data...
	I0920 10:42:04.566104    7862 main.go:141] libmachine: Parsing certificate...
	I0920 10:42:04.566176    7862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:42:04.566224    7862 main.go:141] libmachine: Decoding PEM data...
	I0920 10:42:04.566241    7862 main.go:141] libmachine: Parsing certificate...
	I0920 10:42:04.566910    7862 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:42:04.745331    7862 main.go:141] libmachine: Creating SSH key...
	I0920 10:42:04.777665    7862 main.go:141] libmachine: Creating Disk image...
	I0920 10:42:04.777670    7862 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:42:04.777850    7862 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2
	I0920 10:42:04.786946    7862 main.go:141] libmachine: STDOUT: 
	I0920 10:42:04.786963    7862 main.go:141] libmachine: STDERR: 
	I0920 10:42:04.787025    7862 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2 +20000M
	I0920 10:42:04.794759    7862 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:42:04.794775    7862 main.go:141] libmachine: STDERR: 
	I0920 10:42:04.794787    7862 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2
	I0920 10:42:04.794793    7862 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:42:04.794799    7862 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:42:04.794835    7862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:24:8e:6c:27:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2
	I0920 10:42:04.796439    7862 main.go:141] libmachine: STDOUT: 
	I0920 10:42:04.796454    7862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:42:04.796467    7862 client.go:171] duration metric: took 230.555667ms to LocalClient.Create
	I0920 10:42:06.798634    7862 start.go:128] duration metric: took 2.306623375s to createHost
	I0920 10:42:06.798690    7862 start.go:83] releasing machines lock for "ha-763000", held for 2.307119333s
	W0920 10:42:06.799031    7862 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-763000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-763000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:42:06.817884    7862 out.go:201] 
	W0920 10:42:06.822831    7862 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:42:06.822855    7862 out.go:270] * 
	* 
	W0920 10:42:06.825649    7862 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:42:06.838734    7862 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-763000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (68.070959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.87s)
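Note: both VM creation attempts above die on the same "Connection refused" from /var/run/socket_vmnet, which indicates the socket_vmnet daemon on the build host was most likely not running or not listening; every Kubernetes-level error that follows in this suite is downstream of that. A pre-flight check is a one-line unix-socket dial (path taken from the log; this sketch is ours, not part of the harness):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If nothing is listening on the socket, this fails immediately
	// with "connect: connection refused" -- the same root cause as above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is up")
}
```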

TestMultiControlPlane/serial/DeployApp (87.03s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.167541ms)

** stderr ** 
	error: cluster "ha-763000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- rollout status deployment/busybox: exit status 1 (58.315792ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.914958ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:42:07.098777    7191 retry.go:31] will retry after 1.234158382s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.740042ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:42:08.440016    7191 retry.go:31] will retry after 2.166481869s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.260125ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:42:10.715030    7191 retry.go:31] will retry after 2.19835616s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.065959ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:42:13.020858    7191 retry.go:31] will retry after 2.135779478s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.957417ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:42:15.264169    7191 retry.go:31] will retry after 5.698368701s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.116625ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:42:21.070982    7191 retry.go:31] will retry after 9.64574194s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.647875ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:42:30.823754    7191 retry.go:31] will retry after 12.111481446s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.650042ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:42:43.043164    7191 retry.go:31] will retry after 19.410844097s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.522416ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:43:02.560811    7191 retry.go:31] will retry after 31.025879782s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.363875ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.0145ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.726917ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.645417ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.193667ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (30.267667ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (87.03s)
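Note: this test burns 87 seconds on a cluster that never existed because the harness's retry loop (retry.go) appears to wait a jittered, roughly doubling interval after each failed Pod-IP fetch (1.2s, 2.2s, ... 31s above) before giving up. A generic sketch of that backoff shape; the helper name is ours, not minikube's:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryBackoff re-runs f with roughly doubling waits until the budget is spent.
func retryBackoff(total time.Duration, f func() error) error {
	wait := time.Second
	deadline := time.Now().Add(total)
	for {
		err := f()
		if err == nil {
			return nil
		}
		if time.Now().Add(wait).After(deadline) {
			return err // budget exhausted; surface the last error
		}
		time.Sleep(wait)
		wait *= 2
	}
}

func main() {
	err := retryBackoff(10*time.Second, func() error {
		return errors.New(`no server found for cluster "ha-763000"`)
	})
	fmt.Println(err) // still failing once the deadline is reached
}
```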

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-763000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.801083ms)

** stderr ** 
	error: no server found for cluster "ha-763000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (30.546958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-763000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-763000 -v=7 --alsologtostderr: exit status 83 (47.159416ms)

-- stdout --
	* The control-plane node ha-763000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-763000"

-- /stdout --
** stderr ** 
	I0920 10:43:34.072540    7945 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:34.073160    7945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.073163    7945 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:34.073166    7945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.073331    7945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:43:34.073547    7945 mustload.go:65] Loading cluster: ha-763000
	I0920 10:43:34.073759    7945 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:34.078767    7945 out.go:177] * The control-plane node ha-763000 host is not running: state=Stopped
	I0920 10:43:34.084761    7945 out.go:177]   To start a cluster, run: "minikube start -p ha-763000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-763000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (30.720792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-763000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-763000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (25.752ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-763000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-763000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-763000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (30.595417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
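The second failure line (ha_test.go:264, "unexpected end of JSON input") follows mechanically from the first: kubectl exited non-zero, so its captured stdout was empty, and decoding zero bytes as JSON always fails with exactly that message before any label checking can happen. A one-file sketch of the failure mode, assuming nothing beyond encoding/json:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl failed, so the captured stdout is empty; decoding zero
	// bytes reproduces the exact error from ha_test.go:264.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // "unexpected end of JSON input"
}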

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-763000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-763000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-763000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-763000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-763000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-763000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-763000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-763000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (30.524ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
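Both assertions parse the escaped blob above, i.e. the output of "minikube profile list --output json": an object with "invalid" and "valid" profile arrays, where each valid profile carries a Status plus Config.Nodes. A hedged sketch of that decode — the struct is trimmed to only what the node-count and status checks read, and is not minikube's actual config type:

package main

import (
	"encoding/json"
	"fmt"
)

// profileList keeps only the fields the assertions inspect.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
				Worker       bool   `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated from the captured output: one profile, one node.
	data := []byte(`{"invalid":[],"valid":[{"Name":"ha-763000","Status":"Starting","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(data, &pl); err != nil {
		panic(err)
	}
	p := pl.Valid[0]
	// The test wants 4 nodes and "HAppy"; this cluster has 1 node, "Starting".
	fmt.Println(p.Name, p.Status, len(p.Config.Nodes))
}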

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 status --output json -v=7 --alsologtostderr: exit status 7 (30.696417ms)

-- stdout --
	{"Name":"ha-763000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0920 10:43:34.285779    7957 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:34.285908    7957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.285911    7957 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:34.285914    7957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.286040    7957 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:43:34.286171    7957 out.go:352] Setting JSON to true
	I0920 10:43:34.286181    7957 mustload.go:65] Loading cluster: ha-763000
	I0920 10:43:34.286243    7957 notify.go:220] Checking for updates...
	I0920 10:43:34.286389    7957 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:34.286398    7957 status.go:174] checking status of ha-763000 ...
	I0920 10:43:34.286629    7957 status.go:364] ha-763000 host status = "Stopped" (err=<nil>)
	I0920 10:43:34.286632    7957 status.go:377] host is not running, skipping remaining checks
	I0920 10:43:34.286634    7957 status.go:176] ha-763000 status: &{Name:ha-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-763000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (30.716584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
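The decode error at ha_test.go:333 is a shape mismatch: with only one node present, "status --output json" printed a single object (see the stdout above), while the test unmarshals into a slice ([]cluster.Status). A sketch of a tolerant decoder that accepts both shapes; the Status struct below is a stand-in reduced to the fields visible in the log, not minikube's own definition:

package main

import (
	"encoding/json"
	"fmt"
)

// Status is a trimmed stand-in for the fields in the captured output.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts either a JSON array or, as a fallback,
// the bare object a single-node cluster emits.
func decodeStatuses(data []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	data := []byte(`{"Name":"ha-763000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	sts, err := decodeStatuses(data)
	fmt.Println(sts, err)
}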

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 node stop m02 -v=7 --alsologtostderr: exit status 85 (43.982333ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0920 10:43:34.347601    7961 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:34.348197    7961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.348201    7961 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:34.348204    7961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.348375    7961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:43:34.348695    7961 mustload.go:65] Loading cluster: ha-763000
	I0920 10:43:34.348901    7961 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:34.352037    7961 out.go:201] 
	W0920 10:43:34.353292    7961 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0920 10:43:34.353297    7961 out.go:270] * 
	* 
	W0920 10:43:34.355196    7961 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:43:34.357939    7961 out.go:201] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-763000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr: exit status 7 (30.976084ms)

-- stdout --
	ha-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:43:34.392161    7963 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:34.392317    7963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.392323    7963 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:34.392325    7963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.392460    7963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:43:34.392595    7963 out.go:352] Setting JSON to false
	I0920 10:43:34.392609    7963 mustload.go:65] Loading cluster: ha-763000
	I0920 10:43:34.392672    7963 notify.go:220] Checking for updates...
	I0920 10:43:34.392841    7963 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:34.392850    7963 status.go:174] checking status of ha-763000 ...
	I0920 10:43:34.393085    7963 status.go:364] ha-763000 host status = "Stopped" (err=<nil>)
	I0920 10:43:34.393088    7963 status.go:377] host is not running, skipping remaining checks
	I0920 10:43:34.393090    7963 status.go:176] ha-763000 status: &{Name:ha-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr": ha-763000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr": ha-763000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr": ha-763000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr": ha-763000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (30.476958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
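The GUEST_NODE_RETRIEVE error is consistent with minikube's node-naming convention: secondary nodes are addressed as m02, m03, and so on, so "node stop m02" can only resolve once a second node actually exists — which the failed StartCluster never created. A tiny illustrative sketch of that zero-padded naming (the helper is hypothetical):

package main

import "fmt"

// nodeName mirrors the convention seen in the error: the primary
// control plane has no suffix, later nodes are m02, m03, ...
func nodeName(index int) string {
	return fmt.Sprintf("m%02d", index)
}

func main() {
	fmt.Println(nodeName(2)) // "m02" — the node this test tried to stop
}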

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-763000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-763000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-763000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-763000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (30.092333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

TestMultiControlPlane/serial/RestartSecondaryNode (42s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 node start m02 -v=7 --alsologtostderr: exit status 85 (47.445ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0920 10:43:34.529324    7972 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:34.529963    7972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.529968    7972 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:34.529971    7972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.530148    7972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:43:34.530357    7972 mustload.go:65] Loading cluster: ha-763000
	I0920 10:43:34.530559    7972 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:34.533878    7972 out.go:201] 
	W0920 10:43:34.537899    7972 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0920 10:43:34.537904    7972 out.go:270] * 
	* 
	W0920 10:43:34.539901    7972 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:43:34.543922    7972 out.go:201] 

** /stderr **
ha_test.go:422: I0920 10:43:34.529324    7972 out.go:345] Setting OutFile to fd 1 ...
I0920 10:43:34.529963    7972 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:43:34.529968    7972 out.go:358] Setting ErrFile to fd 2...
I0920 10:43:34.529971    7972 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:43:34.530148    7972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
I0920 10:43:34.530357    7972 mustload.go:65] Loading cluster: ha-763000
I0920 10:43:34.530559    7972 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:43:34.533878    7972 out.go:201] 
W0920 10:43:34.537899    7972 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0920 10:43:34.537904    7972 out.go:270] * 
* 
W0920 10:43:34.539901    7972 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0920 10:43:34.543922    7972 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-763000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr: exit status 7 (30.787916ms)

-- stdout --
	ha-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:43:34.578010    7974 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:34.578164    7974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.578167    7974 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:34.578169    7974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.578302    7974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:43:34.578428    7974 out.go:352] Setting JSON to false
	I0920 10:43:34.578439    7974 mustload.go:65] Loading cluster: ha-763000
	I0920 10:43:34.578506    7974 notify.go:220] Checking for updates...
	I0920 10:43:34.578650    7974 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:34.578660    7974 status.go:174] checking status of ha-763000 ...
	I0920 10:43:34.578892    7974 status.go:364] ha-763000 host status = "Stopped" (err=<nil>)
	I0920 10:43:34.578896    7974 status.go:377] host is not running, skipping remaining checks
	I0920 10:43:34.578898    7974 status.go:176] ha-763000 status: &{Name:ha-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:43:34.579718    7191 retry.go:31] will retry after 918.278425ms: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr: exit status 7 (74.637ms)

-- stdout --
	ha-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:43:35.572884    7976 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:35.573066    7976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:35.573071    7976 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:35.573073    7976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:35.573229    7976 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:43:35.573381    7976 out.go:352] Setting JSON to false
	I0920 10:43:35.573393    7976 mustload.go:65] Loading cluster: ha-763000
	I0920 10:43:35.573438    7976 notify.go:220] Checking for updates...
	I0920 10:43:35.573651    7976 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:35.573661    7976 status.go:174] checking status of ha-763000 ...
	I0920 10:43:35.573965    7976 status.go:364] ha-763000 host status = "Stopped" (err=<nil>)
	I0920 10:43:35.573970    7976 status.go:377] host is not running, skipping remaining checks
	I0920 10:43:35.573973    7976 status.go:176] ha-763000 status: &{Name:ha-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:43:35.574979    7191 retry.go:31] will retry after 1.18853798s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr: exit status 7 (73.935875ms)

-- stdout --
	ha-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:43:36.837613    7978 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:36.837853    7978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:36.837858    7978 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:36.837861    7978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:36.838045    7978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:43:36.838204    7978 out.go:352] Setting JSON to false
	I0920 10:43:36.838218    7978 mustload.go:65] Loading cluster: ha-763000
	I0920 10:43:36.838257    7978 notify.go:220] Checking for updates...
	I0920 10:43:36.838489    7978 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:36.838500    7978 status.go:174] checking status of ha-763000 ...
	I0920 10:43:36.838820    7978 status.go:364] ha-763000 host status = "Stopped" (err=<nil>)
	I0920 10:43:36.838825    7978 status.go:377] host is not running, skipping remaining checks
	I0920 10:43:36.838828    7978 status.go:176] ha-763000 status: &{Name:ha-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:43:36.839920    7191 retry.go:31] will retry after 2.436567283s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr: exit status 7 (73.856459ms)

-- stdout --
	ha-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:43:39.350515    7980 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:39.350695    7980 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:39.350699    7980 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:39.350703    7980 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:39.350878    7980 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:43:39.351065    7980 out.go:352] Setting JSON to false
	I0920 10:43:39.351079    7980 mustload.go:65] Loading cluster: ha-763000
	I0920 10:43:39.351107    7980 notify.go:220] Checking for updates...
	I0920 10:43:39.351397    7980 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:39.351414    7980 status.go:174] checking status of ha-763000 ...
	I0920 10:43:39.351708    7980 status.go:364] ha-763000 host status = "Stopped" (err=<nil>)
	I0920 10:43:39.351714    7980 status.go:377] host is not running, skipping remaining checks
	I0920 10:43:39.351716    7980 status.go:176] ha-763000 status: &{Name:ha-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:43:39.352788    7191 retry.go:31] will retry after 3.50185391s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr: exit status 7 (73.674042ms)

-- stdout --
	ha-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:43:42.928584    7982 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:42.928774    7982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:42.928778    7982 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:42.928782    7982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:42.928972    7982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:43:42.929136    7982 out.go:352] Setting JSON to false
	I0920 10:43:42.929150    7982 mustload.go:65] Loading cluster: ha-763000
	I0920 10:43:42.929203    7982 notify.go:220] Checking for updates...
	I0920 10:43:42.929436    7982 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:42.929446    7982 status.go:174] checking status of ha-763000 ...
	I0920 10:43:42.929764    7982 status.go:364] ha-763000 host status = "Stopped" (err=<nil>)
	I0920 10:43:42.929769    7982 status.go:377] host is not running, skipping remaining checks
	I0920 10:43:42.929772    7982 status.go:176] ha-763000 status: &{Name:ha-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:43:42.930883    7191 retry.go:31] will retry after 6.565199641s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr: exit status 7 (76.077084ms)

-- stdout --
	ha-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:43:49.572199    7987 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:49.572371    7987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:49.572375    7987 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:49.572378    7987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:49.572582    7987 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:43:49.572733    7987 out.go:352] Setting JSON to false
	I0920 10:43:49.572746    7987 mustload.go:65] Loading cluster: ha-763000
	I0920 10:43:49.572787    7987 notify.go:220] Checking for updates...
	I0920 10:43:49.573036    7987 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:49.573045    7987 status.go:174] checking status of ha-763000 ...
	I0920 10:43:49.573385    7987 status.go:364] ha-763000 host status = "Stopped" (err=<nil>)
	I0920 10:43:49.573390    7987 status.go:377] host is not running, skipping remaining checks
	I0920 10:43:49.573392    7987 status.go:176] ha-763000 status: &{Name:ha-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:43:49.574488    7191 retry.go:31] will retry after 7.958564341s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr: exit status 7 (74.786708ms)

-- stdout --
	ha-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:43:57.607922    7989 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:57.608108    7989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:57.608112    7989 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:57.608116    7989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:57.608268    7989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:43:57.608418    7989 out.go:352] Setting JSON to false
	I0920 10:43:57.608432    7989 mustload.go:65] Loading cluster: ha-763000
	I0920 10:43:57.608470    7989 notify.go:220] Checking for updates...
	I0920 10:43:57.608700    7989 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:57.608710    7989 status.go:174] checking status of ha-763000 ...
	I0920 10:43:57.609032    7989 status.go:364] ha-763000 host status = "Stopped" (err=<nil>)
	I0920 10:43:57.609036    7989 status.go:377] host is not running, skipping remaining checks
	I0920 10:43:57.609039    7989 status.go:176] ha-763000 status: &{Name:ha-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:43:57.610157    7191 retry.go:31] will retry after 8.266151936s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr: exit status 7 (74.872625ms)

-- stdout --
	ha-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:44:05.951345    7994 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:44:05.951552    7994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:05.951556    7994 out.go:358] Setting ErrFile to fd 2...
	I0920 10:44:05.951559    7994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:05.951728    7994 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:44:05.951890    7994 out.go:352] Setting JSON to false
	I0920 10:44:05.951903    7994 mustload.go:65] Loading cluster: ha-763000
	I0920 10:44:05.951951    7994 notify.go:220] Checking for updates...
	I0920 10:44:05.952162    7994 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:05.952176    7994 status.go:174] checking status of ha-763000 ...
	I0920 10:44:05.952487    7994 status.go:364] ha-763000 host status = "Stopped" (err=<nil>)
	I0920 10:44:05.952492    7994 status.go:377] host is not running, skipping remaining checks
	I0920 10:44:05.952494    7994 status.go:176] ha-763000 status: &{Name:ha-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:44:05.953560    7191 retry.go:31] will retry after 10.437721183s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr: exit status 7 (75.735916ms)

-- stdout --
	ha-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:44:16.467057    7996 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:44:16.467266    7996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:16.467271    7996 out.go:358] Setting ErrFile to fd 2...
	I0920 10:44:16.467274    7996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:16.467443    7996 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:44:16.467588    7996 out.go:352] Setting JSON to false
	I0920 10:44:16.467601    7996 mustload.go:65] Loading cluster: ha-763000
	I0920 10:44:16.467639    7996 notify.go:220] Checking for updates...
	I0920 10:44:16.467875    7996 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:16.467890    7996 status.go:174] checking status of ha-763000 ...
	I0920 10:44:16.468197    7996 status.go:364] ha-763000 host status = "Stopped" (err=<nil>)
	I0920 10:44:16.468202    7996 status.go:377] host is not running, skipping remaining checks
	I0920 10:44:16.468204    7996 status.go:176] ha-763000 status: &{Name:ha-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (33.521584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (42.00s)
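The retry.go:31 lines above show the harness re-running the status check with growing, jittered waits (0.92s, 1.19s, 2.44s, ... 10.44s) until the roughly 42-second budget lapses. A rough sketch of such a backoff loop; the budget, base delay, and jitter formula are assumptions for illustration, not minikube's exact retry.go implementation:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs check with exponential backoff plus jitter until it
// succeeds or the overall budget is exhausted, echoing the
// "will retry after ..." lines in the log.
func retry(budget time.Duration, check func() error) error {
	deadline := time.Now().Add(budget)
	wait := time.Second
	for {
		err := check()
		if err == nil || time.Now().After(deadline) {
			return err
		}
		sleep := wait/2 + time.Duration(rand.Int63n(int64(wait))) // jittered
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
}

func main() {
	// A check that always fails, as "minikube status" did here (exit status 7).
	_ = retry(42*time.Second, func() error { return fmt.Errorf("exit status 7") })
}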

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-763000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-763000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-763000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-763000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-763000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-763000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-763000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-763000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
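Both assertions above parse the same 'profile list' JSON: the profile is still "Starting" and its Nodes array holds a single entry, because the restart never got past the socket_vmnet failure. The node-count check can be reproduced by hand; a minimal sketch, assuming jq is installed (it is not part of the test harness):

    # Count the nodes recorded for the ha-763000 profile. The test expects 4
    # (the three control-plane nodes plus the added worker); the config above has 1.
    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | select(.Name == "ha-763000") | .Config.Nodes | length'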
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (30.545083ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-763000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-763000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-763000 -v=7 --alsologtostderr: (2.560940958s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-763000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-763000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.226145792s)
-- stdout --
	* [ha-763000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-763000" primary control-plane node in "ha-763000" cluster
	* Restarting existing qemu2 VM for "ha-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0920 10:44:19.235059    8025 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:44:19.235224    8025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:19.235228    8025 out.go:358] Setting ErrFile to fd 2...
	I0920 10:44:19.235232    8025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:19.235420    8025 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:44:19.236660    8025 out.go:352] Setting JSON to false
	I0920 10:44:19.255724    8025 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4422,"bootTime":1726849837,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:44:19.255792    8025 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:44:19.260695    8025 out.go:177] * [ha-763000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:44:19.265282    8025 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:44:19.265307    8025 notify.go:220] Checking for updates...
	I0920 10:44:19.273600    8025 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:44:19.277570    8025 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:44:19.280578    8025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:44:19.283647    8025 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:44:19.286585    8025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:44:19.289916    8025 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:19.289972    8025 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:44:19.294576    8025 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:44:19.301600    8025 start.go:297] selected driver: qemu2
	I0920 10:44:19.301607    8025 start.go:901] validating driver "qemu2" against &{Name:ha-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:44:19.301673    8025 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:44:19.304087    8025 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:44:19.304113    8025 cni.go:84] Creating CNI manager for ""
	I0920 10:44:19.304144    8025 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 10:44:19.304198    8025 start.go:340] cluster config:
	{Name:ha-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:44:19.307843    8025 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:44:19.316548    8025 out.go:177] * Starting "ha-763000" primary control-plane node in "ha-763000" cluster
	I0920 10:44:19.320629    8025 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:44:19.320666    8025 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:44:19.320675    8025 cache.go:56] Caching tarball of preloaded images
	I0920 10:44:19.320750    8025 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:44:19.320757    8025 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:44:19.320819    8025 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/ha-763000/config.json ...
	I0920 10:44:19.321343    8025 start.go:360] acquireMachinesLock for ha-763000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:44:19.321382    8025 start.go:364] duration metric: took 31.833µs to acquireMachinesLock for "ha-763000"
	I0920 10:44:19.321392    8025 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:44:19.321397    8025 fix.go:54] fixHost starting: 
	I0920 10:44:19.321531    8025 fix.go:112] recreateIfNeeded on ha-763000: state=Stopped err=<nil>
	W0920 10:44:19.321540    8025 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:44:19.328543    8025 out.go:177] * Restarting existing qemu2 VM for "ha-763000" ...
	I0920 10:44:19.332613    8025 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:44:19.332649    8025 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:24:8e:6c:27:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2
	I0920 10:44:19.334691    8025 main.go:141] libmachine: STDOUT: 
	I0920 10:44:19.334707    8025 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:44:19.334737    8025 fix.go:56] duration metric: took 13.337417ms for fixHost
	I0920 10:44:19.334741    8025 start.go:83] releasing machines lock for "ha-763000", held for 13.354625ms
	W0920 10:44:19.334747    8025 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:44:19.334788    8025 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:19.334793    8025 start.go:729] Will try again in 5 seconds ...
	I0920 10:44:24.336954    8025 start.go:360] acquireMachinesLock for ha-763000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:44:24.337423    8025 start.go:364] duration metric: took 349.042µs to acquireMachinesLock for "ha-763000"
	I0920 10:44:24.337555    8025 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:44:24.337574    8025 fix.go:54] fixHost starting: 
	I0920 10:44:24.338248    8025 fix.go:112] recreateIfNeeded on ha-763000: state=Stopped err=<nil>
	W0920 10:44:24.338274    8025 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:44:24.346707    8025 out.go:177] * Restarting existing qemu2 VM for "ha-763000" ...
	I0920 10:44:24.350576    8025 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:44:24.350903    8025 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:24:8e:6c:27:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2
	I0920 10:44:24.359605    8025 main.go:141] libmachine: STDOUT: 
	I0920 10:44:24.359656    8025 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:44:24.359723    8025 fix.go:56] duration metric: took 22.1485ms for fixHost
	I0920 10:44:24.359740    8025 start.go:83] releasing machines lock for "ha-763000", held for 22.296ms
	W0920 10:44:24.359883    8025 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-763000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-763000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:24.369612    8025 out.go:201] 
	W0920 10:44:24.373757    8025 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:44:24.373786    8025 out.go:270] * 
	* 
	W0920 10:44:24.376254    8025 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:44:24.389561    8025 out.go:201] 
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-763000 -v=7 --alsologtostderr" : exit status 80
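Every start attempt in this run dies at the same point: the libmachine exec line above launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that wrapper cannot reach the /var/run/socket_vmnet socket. A minimal host-side sanity check, as a sketch (paths are taken from the log; how the daemon is supervised may differ between agents):

    # Is the socket_vmnet daemon alive, and does its socket exist?
    ps aux | grep '[s]ocket_vmnet'
    ls -l /var/run/socket_vmnet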
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-763000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (32.44825ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.92s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.085416ms)
-- stdout --
	* The control-plane node ha-763000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-763000"
-- /stdout --
** stderr ** 
	I0920 10:44:24.531190    8037 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:44:24.531603    8037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:24.531607    8037 out.go:358] Setting ErrFile to fd 2...
	I0920 10:44:24.531609    8037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:24.531783    8037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:44:24.532022    8037 mustload.go:65] Loading cluster: ha-763000
	I0920 10:44:24.532243    8037 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:24.536856    8037 out.go:177] * The control-plane node ha-763000 host is not running: state=Stopped
	I0920 10:44:24.539900    8037 out.go:177]   To start a cluster, run: "minikube start -p ha-763000"
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-763000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr: exit status 7 (30.3755ms)
-- stdout --
	ha-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0920 10:44:24.572478    8039 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:44:24.572627    8039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:24.572630    8039 out.go:358] Setting ErrFile to fd 2...
	I0920 10:44:24.572633    8039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:24.572743    8039 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:44:24.572852    8039 out.go:352] Setting JSON to false
	I0920 10:44:24.572862    8039 mustload.go:65] Loading cluster: ha-763000
	I0920 10:44:24.572931    8039 notify.go:220] Checking for updates...
	I0920 10:44:24.573075    8039 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:24.573085    8039 status.go:174] checking status of ha-763000 ...
	I0920 10:44:24.573335    8039 status.go:364] ha-763000 host status = "Stopped" (err=<nil>)
	I0920 10:44:24.573338    8039 status.go:377] host is not running, skipping remaining checks
	I0920 10:44:24.573340    8039 status.go:176] ha-763000 status: &{Name:ha-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr" : exit status 7
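Note the split interpretation of exit status 7 here: ha_test.go:495 counts it as a failure, while the post-mortem helper below prints "exit status 7 (may be ok)", since a nonzero code from 'minikube status' can simply reflect stopped components rather than a hard error. A sketch of reproducing the observation by hand:

    out/minikube-darwin-arm64 -p ha-763000 status; echo "exit=$?"
    # prints the Stopped summary shown above, then exit=7 while the VM is down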
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (31.168833ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-763000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-763000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-763000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-763000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (31.000333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-763000 stop -v=7 --alsologtostderr: (3.018662458s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr: exit status 7 (66.910292ms)
-- stdout --
	ha-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0920 10:44:27.768813    8066 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:44:27.769012    8066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:27.769017    8066 out.go:358] Setting ErrFile to fd 2...
	I0920 10:44:27.769019    8066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:27.769225    8066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:44:27.769373    8066 out.go:352] Setting JSON to false
	I0920 10:44:27.769387    8066 mustload.go:65] Loading cluster: ha-763000
	I0920 10:44:27.769427    8066 notify.go:220] Checking for updates...
	I0920 10:44:27.769665    8066 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:27.769680    8066 status.go:174] checking status of ha-763000 ...
	I0920 10:44:27.769987    8066 status.go:364] ha-763000 host status = "Stopped" (err=<nil>)
	I0920 10:44:27.769991    8066 status.go:377] host is not running, skipping remaining checks
	I0920 10:44:27.769994    8066 status.go:176] ha-763000 status: &{Name:ha-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr": ha-763000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr": ha-763000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-763000 status -v=7 --alsologtostderr": ha-763000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (32.279916ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.12s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-763000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-763000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.182152125s)
-- stdout --
	* [ha-763000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-763000" primary control-plane node in "ha-763000" cluster
	* Restarting existing qemu2 VM for "ha-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0920 10:44:27.832173    8070 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:44:27.832299    8070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:27.832302    8070 out.go:358] Setting ErrFile to fd 2...
	I0920 10:44:27.832304    8070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:27.832473    8070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:44:27.833485    8070 out.go:352] Setting JSON to false
	I0920 10:44:27.849678    8070 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4430,"bootTime":1726849837,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:44:27.849755    8070 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:44:27.854064    8070 out.go:177] * [ha-763000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:44:27.860903    8070 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:44:27.860942    8070 notify.go:220] Checking for updates...
	I0920 10:44:27.867909    8070 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:44:27.870968    8070 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:44:27.873907    8070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:44:27.876946    8070 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:44:27.879908    8070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:44:27.883216    8070 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:27.883474    8070 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:44:27.887898    8070 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:44:27.894859    8070 start.go:297] selected driver: qemu2
	I0920 10:44:27.894865    8070 start.go:901] validating driver "qemu2" against &{Name:ha-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:44:27.894912    8070 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:44:27.897235    8070 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:44:27.897262    8070 cni.go:84] Creating CNI manager for ""
	I0920 10:44:27.897285    8070 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 10:44:27.897326    8070 start.go:340] cluster config:
	{Name:ha-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:44:27.900678    8070 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:44:27.906873    8070 out.go:177] * Starting "ha-763000" primary control-plane node in "ha-763000" cluster
	I0920 10:44:27.910936    8070 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:44:27.910954    8070 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:44:27.910969    8070 cache.go:56] Caching tarball of preloaded images
	I0920 10:44:27.911031    8070 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:44:27.911042    8070 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:44:27.911123    8070 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/ha-763000/config.json ...
	I0920 10:44:27.911569    8070 start.go:360] acquireMachinesLock for ha-763000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:44:27.911597    8070 start.go:364] duration metric: took 21.584µs to acquireMachinesLock for "ha-763000"
	I0920 10:44:27.911606    8070 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:44:27.911612    8070 fix.go:54] fixHost starting: 
	I0920 10:44:27.911728    8070 fix.go:112] recreateIfNeeded on ha-763000: state=Stopped err=<nil>
	W0920 10:44:27.911736    8070 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:44:27.915889    8070 out.go:177] * Restarting existing qemu2 VM for "ha-763000" ...
	I0920 10:44:27.923732    8070 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:44:27.923763    8070 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:24:8e:6c:27:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2
	I0920 10:44:27.925569    8070 main.go:141] libmachine: STDOUT: 
	I0920 10:44:27.925595    8070 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:44:27.925621    8070 fix.go:56] duration metric: took 14.009958ms for fixHost
	I0920 10:44:27.925626    8070 start.go:83] releasing machines lock for "ha-763000", held for 14.025416ms
	W0920 10:44:27.925632    8070 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:44:27.925662    8070 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:27.925667    8070 start.go:729] Will try again in 5 seconds ...
	I0920 10:44:32.927819    8070 start.go:360] acquireMachinesLock for ha-763000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:44:32.928242    8070 start.go:364] duration metric: took 340.833µs to acquireMachinesLock for "ha-763000"
	I0920 10:44:32.928383    8070 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:44:32.928404    8070 fix.go:54] fixHost starting: 
	I0920 10:44:32.929148    8070 fix.go:112] recreateIfNeeded on ha-763000: state=Stopped err=<nil>
	W0920 10:44:32.929173    8070 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:44:32.934699    8070 out.go:177] * Restarting existing qemu2 VM for "ha-763000" ...
	I0920 10:44:32.938624    8070 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:44:32.938784    8070 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:24:8e:6c:27:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/ha-763000/disk.qcow2
	I0920 10:44:32.948348    8070 main.go:141] libmachine: STDOUT: 
	I0920 10:44:32.948405    8070 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:44:32.948547    8070 fix.go:56] duration metric: took 20.122333ms for fixHost
	I0920 10:44:32.948565    8070 start.go:83] releasing machines lock for "ha-763000", held for 20.30325ms
	W0920 10:44:32.948730    8070 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-763000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-763000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:32.957639    8070 out.go:201] 
	W0920 10:44:32.961564    8070 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:44:32.961601    8070 out.go:270] * 
	* 
	W0920 10:44:32.964049    8070 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:44:32.972600    8070 out.go:201] 
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-763000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
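The GUEST_PROVISION error text carries its own suggested recovery. Spelled out as commands, a sketch of that suggestion only (--network=socket_vmnet mirrors the Network field in the profile config above, and recreating the VM still requires a reachable socket_vmnet daemon):

    out/minikube-darwin-arm64 delete -p ha-763000
    out/minikube-darwin-arm64 start -p ha-763000 --driver=qemu2 --network=socket_vmnet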
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (69.622542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-763000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-763000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-763000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-763000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (31.215708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
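For readers tracing the assertion above: a minimal sketch (not the actual ha_test.go code) of how a status check like ha_test.go:413 can be driven from `minikube profile list --output json`. The `valid`/`Name`/`Status` fields mirror the JSON in the failure message, and the binary path and profile name are taken from this report; everything else is illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models only the fields this check needs from
// `minikube profile list --output json`.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-763000" && p.Status != "Degraded" {
			// This is the mismatch reported above: Status is "Starting".
			fmt.Printf("expected %q to be Degraded, got %q\n", p.Name, p.Status)
		}
	}
}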
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-763000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-763000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.403417ms)
-- stdout --
	* The control-plane node ha-763000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-763000"
-- /stdout --
** stderr ** 
	I0920 10:44:33.166140    8088 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:44:33.166284    8088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:33.166287    8088 out.go:358] Setting ErrFile to fd 2...
	I0920 10:44:33.166290    8088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:33.166429    8088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:44:33.166667    8088 mustload.go:65] Loading cluster: ha-763000
	I0920 10:44:33.166878    8088 config.go:182] Loaded profile config "ha-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:33.169813    8088 out.go:177] * The control-plane node ha-763000 host is not running: state=Stopped
	I0920 10:44:33.173782    8088 out.go:177]   To start a cluster, run: "minikube start -p ha-763000"
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-763000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (30.543333ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
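The post-mortem pattern repeated throughout this report ("status error: exit status 7 (may be ok)") works because `minikube status` encodes host state in its exit code; 7 on these runs accompanies a "Stopped" host. A hedged sketch of that check, using the same invocation the helpers run (profile name taken from this report):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as helpers_test.go's post-mortem step.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "ha-763000", "-n", "ha-763000")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A non-zero exit still carries a usable state string on stdout.
		fmt.Printf("state=%q exit=%d (may be ok)\n", string(out), ee.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("state=%q\n", string(out))
}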
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-763000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-763000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-763000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-763000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-763000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-763000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-763000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-763000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-763000 -n ha-763000: exit status 7 (30.458292ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)
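The companion assertion at ha_test.go:304 counts nodes inside the nested Config object of the same profile JSON. A sketch of that shape, with fields trimmed to what the count needs and the sample input abbreviated from the got-JSON in the failure above:

package main

import (
	"encoding/json"
	"fmt"
)

type profiles struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated from the failure message: one control-plane node only.
	data := []byte(`{"invalid":[],"valid":[{"Name":"ha-763000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var ps profiles
	if err := json.Unmarshal(data, &ps); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	for _, p := range ps.Valid {
		if p.Name == "ha-763000" && len(p.Config.Nodes) != 4 {
			fmt.Printf("expected 4 nodes, have %d\n", len(p.Config.Nodes)) // -> have 1
		}
	}
}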
TestImageBuild/serial/Setup (10.07s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-648000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-648000 --driver=qemu2 : exit status 80 (9.995765s)
-- stdout --
	* [image-648000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-648000" primary control-plane node in "image-648000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-648000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-648000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-648000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-648000 -n image-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-648000 -n image-648000: exit status 7 (72.942375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.07s)
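Every GUEST_PROVISION failure in this run reduces to the same root cause: the socket_vmnet client that qemu is launched through gets "Connection refused" on /var/run/socket_vmnet, i.e. the socket_vmnet daemon is not accepting connections on this agent. A quick probe (a sketch; the socket path is taken from the logs above) separates that condition from other provisioning faults:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client hands to qemu.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this agent: "connect: connection refused" -> daemon not running.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}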
TestJSONOutput/start/Command (9.98s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-854000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-854000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.980590333s)
-- stdout --
	{"specversion":"1.0","id":"af996346-72c1-4ae4-99f2-2fc549888cd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-854000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b10d3900-aeec-44e5-8429-499cc7d790e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"00e2f4d9-5548-47b9-ac31-9d7177a367d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig"}}
	{"specversion":"1.0","id":"e104e566-f9fd-4e37-80f9-1b1f40e1ad21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"21b7d72e-7cb8-48cf-b156-175168d73c7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"da24e668-aaf5-4807-9449-e168d7992607","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube"}}
	{"specversion":"1.0","id":"07428a75-0615-4003-9c8a-7228b60ea69b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"eb4cf25f-c7a9-4732-a1b2-ad9d8e0d665d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"43e72810-b64f-444b-a4f9-c23b69fbfb69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c71a8d97-eb4a-4b09-9e78-7f971ccdcb1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-854000\" primary control-plane node in \"json-output-854000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"788dac4e-201c-41cd-87bb-b9c4b06ffd01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"f573a7a7-42a5-4514-9ebe-d1461c0b265b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-854000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c216ac9-ec41-4973-ba83-138aa9b2fc2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"9f29c986-46ea-4f9b-b853-061c3d16457d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"4387e432-73c2-47a8-8f71-b1e91150d499","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-854000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"812a6e1f-af7e-4cdc-9c86-198944d8979d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"651c0c2f-335a-4cf0-bd0d-62adb6828023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-854000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.98s)
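The secondary errors at json_output_test.go:213 and :70 are a parsing knock-on, not a separate bug: the start command interleaves raw "OUTPUT:"/"ERROR:" text with its CloudEvents stream, and any line that is not a JSON object fails to decode. A sketch of the failure mode, with event handling trimmed to a map for illustration:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Two lines as they appear in the stdout above: one CloudEvent, one raw.
	lines := []string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=19678"}}`,
		`OUTPUT: `,
	}
	for _, l := range lines {
		var ev map[string]any
		if err := json.Unmarshal([]byte(l), &ev); err != nil {
			// -> invalid character 'O' looking for beginning of value
			fmt.Printf("line %q: %v\n", l, err)
			continue
		}
		fmt.Println("parsed event of type", ev["type"])
	}
}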
TestJSONOutput/pause/Command (0.08s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-854000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-854000 --output=json --user=testUser: exit status 83 (79.869792ms)
-- stdout --
	{"specversion":"1.0","id":"22c3e144-6df6-45f5-8c8c-c8cf18a63912","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-854000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"0d9d427d-8b8e-40eb-9d65-4fe0c764ab07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-854000\""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-854000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)
TestJSONOutput/unpause/Command (0.05s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-854000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-854000 --output=json --user=testUser: exit status 83 (45.460958ms)
-- stdout --
	* The control-plane node json-output-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-854000"
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-854000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-854000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)
TestMinikubeProfile (10.34s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-579000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-579000 --driver=qemu2 : exit status 80 (10.045572708s)
-- stdout --
	* [first-579000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-579000" primary control-plane node in "first-579000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-579000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-579000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-579000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-20 10:45:07.481716 -0700 PDT m=+404.258191959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-581000 -n second-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-581000 -n second-581000: exit status 85 (79.681375ms)
-- stdout --
	* Profile "second-581000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-581000"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-581000" host is not running, skipping log retrieval (state="* Profile \"second-581000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-581000\"")
helpers_test.go:175: Cleaning up "second-581000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-581000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-20 10:45:07.666432 -0700 PDT m=+404.442909042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-579000 -n first-579000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-579000 -n first-579000: exit status 7 (29.535583ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-579000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-579000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-579000
--- FAIL: TestMinikubeProfile (10.34s)
TestMountStart/serial/StartWithMountFirst (10.01s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-309000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-309000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.937813292s)
-- stdout --
	* [mount-start-1-309000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-309000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-309000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-309000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-309000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-309000 -n mount-start-1-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-309000 -n mount-start-1-309000: exit status 7 (68.473958ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.01s)
TestMultiNode/serial/FreshStart2Nodes (9.89s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-483000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-483000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.810183333s)
-- stdout --
	* [multinode-483000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-483000" primary control-plane node in "multinode-483000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-483000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0920 10:45:17.994221    8236 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:45:17.994342    8236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:45:17.994346    8236 out.go:358] Setting ErrFile to fd 2...
	I0920 10:45:17.994348    8236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:45:17.994481    8236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:45:17.995578    8236 out.go:352] Setting JSON to false
	I0920 10:45:18.011772    8236 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4481,"bootTime":1726849837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:45:18.011829    8236 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:45:18.018641    8236 out.go:177] * [multinode-483000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:45:18.028508    8236 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:45:18.028565    8236 notify.go:220] Checking for updates...
	I0920 10:45:18.036500    8236 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:45:18.039409    8236 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:45:18.042490    8236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:45:18.045509    8236 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:45:18.053569    8236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:45:18.056681    8236 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:45:18.061429    8236 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:45:18.068460    8236 start.go:297] selected driver: qemu2
	I0920 10:45:18.068467    8236 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:45:18.068474    8236 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:45:18.070933    8236 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:45:18.074521    8236 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:45:18.077507    8236 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:45:18.077527    8236 cni.go:84] Creating CNI manager for ""
	I0920 10:45:18.077549    8236 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 10:45:18.077554    8236 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 10:45:18.077587    8236 start.go:340] cluster config:
	{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:45:18.081689    8236 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:45:18.089300    8236 out.go:177] * Starting "multinode-483000" primary control-plane node in "multinode-483000" cluster
	I0920 10:45:18.093402    8236 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:45:18.093419    8236 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:45:18.093470    8236 cache.go:56] Caching tarball of preloaded images
	I0920 10:45:18.093551    8236 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:45:18.093558    8236 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:45:18.093799    8236 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/multinode-483000/config.json ...
	I0920 10:45:18.093812    8236 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/multinode-483000/config.json: {Name:mk1e660d4d76fd6c67ff5d1808a6994cc5e72e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:45:18.094316    8236 start.go:360] acquireMachinesLock for multinode-483000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:45:18.094358    8236 start.go:364] duration metric: took 33.75µs to acquireMachinesLock for "multinode-483000"
	I0920 10:45:18.094373    8236 start.go:93] Provisioning new machine with config: &{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:45:18.094412    8236 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:45:18.102420    8236 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:45:18.121582    8236 start.go:159] libmachine.API.Create for "multinode-483000" (driver="qemu2")
	I0920 10:45:18.121608    8236 client.go:168] LocalClient.Create starting
	I0920 10:45:18.121677    8236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:45:18.121714    8236 main.go:141] libmachine: Decoding PEM data...
	I0920 10:45:18.121725    8236 main.go:141] libmachine: Parsing certificate...
	I0920 10:45:18.121766    8236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:45:18.121791    8236 main.go:141] libmachine: Decoding PEM data...
	I0920 10:45:18.121803    8236 main.go:141] libmachine: Parsing certificate...
	I0920 10:45:18.122232    8236 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:45:18.290253    8236 main.go:141] libmachine: Creating SSH key...
	I0920 10:45:18.347015    8236 main.go:141] libmachine: Creating Disk image...
	I0920 10:45:18.347021    8236 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:45:18.347206    8236 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2
	I0920 10:45:18.356350    8236 main.go:141] libmachine: STDOUT: 
	I0920 10:45:18.356370    8236 main.go:141] libmachine: STDERR: 
	I0920 10:45:18.356440    8236 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2 +20000M
	I0920 10:45:18.364382    8236 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:45:18.364397    8236 main.go:141] libmachine: STDERR: 
	I0920 10:45:18.364409    8236 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2
	I0920 10:45:18.364414    8236 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:45:18.364427    8236 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:45:18.364451    8236 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:1d:b4:ea:7d:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2
	I0920 10:45:18.366042    8236 main.go:141] libmachine: STDOUT: 
	I0920 10:45:18.366056    8236 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:45:18.366075    8236 client.go:171] duration metric: took 244.461666ms to LocalClient.Create
	I0920 10:45:20.368241    8236 start.go:128] duration metric: took 2.273819208s to createHost
	I0920 10:45:20.368310    8236 start.go:83] releasing machines lock for "multinode-483000", held for 2.273952541s
	W0920 10:45:20.368398    8236 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:45:20.381619    8236 out.go:177] * Deleting "multinode-483000" in qemu2 ...
	W0920 10:45:20.414648    8236 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:45:20.414674    8236 start.go:729] Will try again in 5 seconds ...
	I0920 10:45:25.416935    8236 start.go:360] acquireMachinesLock for multinode-483000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:45:25.417367    8236 start.go:364] duration metric: took 350.375µs to acquireMachinesLock for "multinode-483000"
	I0920 10:45:25.417495    8236 start.go:93] Provisioning new machine with config: &{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:45:25.417732    8236 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:45:25.439513    8236 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:45:25.490484    8236 start.go:159] libmachine.API.Create for "multinode-483000" (driver="qemu2")
	I0920 10:45:25.490534    8236 client.go:168] LocalClient.Create starting
	I0920 10:45:25.490645    8236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:45:25.490713    8236 main.go:141] libmachine: Decoding PEM data...
	I0920 10:45:25.490736    8236 main.go:141] libmachine: Parsing certificate...
	I0920 10:45:25.490806    8236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:45:25.490851    8236 main.go:141] libmachine: Decoding PEM data...
	I0920 10:45:25.490862    8236 main.go:141] libmachine: Parsing certificate...
	I0920 10:45:25.491412    8236 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:45:25.667770    8236 main.go:141] libmachine: Creating SSH key...
	I0920 10:45:25.705189    8236 main.go:141] libmachine: Creating Disk image...
	I0920 10:45:25.705199    8236 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:45:25.705399    8236 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2
	I0920 10:45:25.714724    8236 main.go:141] libmachine: STDOUT: 
	I0920 10:45:25.714743    8236 main.go:141] libmachine: STDERR: 
	I0920 10:45:25.714802    8236 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2 +20000M
	I0920 10:45:25.722677    8236 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:45:25.722692    8236 main.go:141] libmachine: STDERR: 
	I0920 10:45:25.722705    8236 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2
	I0920 10:45:25.722722    8236 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:45:25.722729    8236 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:45:25.722758    8236 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:22:bc:38:bd:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2
	I0920 10:45:25.724350    8236 main.go:141] libmachine: STDOUT: 
	I0920 10:45:25.724365    8236 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:45:25.724381    8236 client.go:171] duration metric: took 233.841125ms to LocalClient.Create
	I0920 10:45:27.726549    8236 start.go:128] duration metric: took 2.308774833s to createHost
	I0920 10:45:27.726594    8236 start.go:83] releasing machines lock for "multinode-483000", held for 2.309210875s
	W0920 10:45:27.726897    8236 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-483000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-483000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:45:27.743657    8236 out.go:201] 
	W0920 10:45:27.747806    8236 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:45:27.747833    8236 out.go:270] * 
	* 
	W0920 10:45:27.750725    8236 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:45:27.761613    8236 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-483000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
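
Note: every qemu2 start failure in this report bottoms out in the same condition: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so QEMU never receives a network fd (the "-netdev socket,id=net0,fd=3" argument above). The daemon's liveness can be checked outside of minikube by dialing the socket directly; the following Go sketch is illustrative and not part of the test suite:

	// probe_socket_vmnet.go - check whether the socket_vmnet daemon is
	// accepting connections on its unix socket (illustrative only).
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "Connection refused" here matches the failure in the log:
			// nothing is listening on the socket path.
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}
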
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (72.527583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.89s)
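
Note: the recurring post-mortem command, status --format={{.Host}}, applies a Go text/template to minikube's per-node status struct (its fields are visible later in this log: Name, Host, Kubelet, APIServer, Kubeconfig, Worker). A minimal sketch of how that template yields "Stopped", using a trimmed stand-in struct:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a trimmed stand-in for minikube's status struct; the
	// field set mirrors what this log prints.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		st := Status{Name: "multinode-483000", Host: "Stopped",
			Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		// The same template string passed to --format on the command line.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		tmpl.Execute(os.Stdout, st) // prints: Stopped
	}
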

TestMultiNode/serial/DeployApp2Nodes (112.76s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (60.939791ms)

** stderr ** 
	error: cluster "multinode-483000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- rollout status deployment/busybox: exit status 1 (57.374291ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.259ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:45:28.026443    7191 retry.go:31] will retry after 1.03048214s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.600542ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:45:29.163840    7191 retry.go:31] will retry after 801.084119ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.892875ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:45:30.070274    7191 retry.go:31] will retry after 3.18122435s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.609459ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:45:33.357460    7191 retry.go:31] will retry after 4.042132373s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.438708ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:45:37.506423    7191 retry.go:31] will retry after 7.221697743s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.888084ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:45:44.835499    7191 retry.go:31] will retry after 7.055674714s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.041667ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:45:51.997528    7191 retry.go:31] will retry after 15.198449631s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.718875ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:46:07.303238    7191 retry.go:31] will retry after 9.848915911s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.110417ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:46:17.258660    7191 retry.go:31] will retry after 37.169709471s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.065125ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:46:54.535064    7191 retry.go:31] will retry after 25.708666499s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.243542ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
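
Note: the retry.go lines above show the harness re-running the Pod-IP query with a growing, jittered delay until an overall deadline expires. A minimal sketch of that pattern (the helper name and timings are illustrative, not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn with a growing, jittered delay until
	// it succeeds or the overall time budget is spent.
	func retryWithBackoff(budget time.Duration, fn func() error) error {
		deadline := time.Now().Add(budget)
		delay := time.Second
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("deadline exceeded: %w", err)
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2 // the logged intervals grow roughly like this
		}
	}

	func main() {
		err := retryWithBackoff(5*time.Second, func() error {
			return errors.New(`no server found for cluster "multinode-483000"`)
		})
		fmt.Println(err)
	}
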
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.065834ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.408917ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.431375ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.803916ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
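
Note: the three nslookup probes walk the DNS ladder the busybox pods must handle: a public name, the short in-cluster service name, and its fully qualified form. The same lookups can be sketched from the host with Go's resolver (host-side illustration; only the public name is expected to resolve without the cluster's DNS search path):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The same names the test asks the pods to resolve.
		names := []string{
			"kubernetes.io",
			"kubernetes.default",
			"kubernetes.default.svc.cluster.local",
		}
		var r net.Resolver
		for _, name := range names {
			ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
			addrs, err := r.LookupHost(ctx, name)
			cancel()
			if err != nil {
				fmt.Printf("%s: %v\n", name, err)
				continue
			}
			fmt.Printf("%s: %v\n", name, addrs)
		}
	}
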
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (30.719958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (112.76s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.421584ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (30.832625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-483000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-483000 -v 3 --alsologtostderr: exit status 83 (44.144ms)

-- stdout --
	* The control-plane node multinode-483000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-483000"

-- /stdout --
** stderr ** 
	I0920 10:47:20.731464    8328 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:47:20.731610    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:20.731613    8328 out.go:358] Setting ErrFile to fd 2...
	I0920 10:47:20.731615    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:20.731757    8328 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:47:20.732000    8328 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:47:20.732211    8328 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:47:20.737203    8328 out.go:177] * The control-plane node multinode-483000 host is not running: state=Stopped
	I0920 10:47:20.742044    8328 out.go:177]   To start a cluster, run: "minikube start -p multinode-483000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-483000 -v 3 --alsologtostderr" : exit status 83
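
Note: each "(dbg) Run:" line is the harness shelling out to the built minikube binary and recording the exit status (83 here accompanies the "host is not running" advisory output). A minimal sketch of that pattern, with the binary path and arguments taken from the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"node", "add", "-p", "multinode-483000", "-v", "3", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The test treats any non-zero status as a failure and records it.
			fmt.Printf("exit status %d\n", exitErr.ExitCode())
		}
	}
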
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (30.55925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-483000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-483000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.437125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-483000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-483000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-483000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
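
Note: the jsonpath template above emits a bracketed, comma-separated list of label maps, which the test then decodes as JSON; with no cluster, kubectl printed nothing, hence "unexpected end of JSON input". A sketch of the decode step (the trailing-comma trim is an assumption about the template's raw output):

	package main

	import (
		"encoding/json"
		"fmt"
		"strings"
	)

	func main() {
		// Example of what the template could emit for one node; an empty
		// string, as in the failing run, cannot be decoded at all.
		raw := `[{"kubernetes.io/hostname":"multinode-483000"},]`
		raw = strings.Replace(raw, ",]", "]", 1) // drop the range's trailing comma
		var labels []map[string]string
		if err := json.Unmarshal([]byte(raw), &labels); err != nil {
			fmt.Println("failed to decode json from label list:", err)
			return
		}
		fmt.Println(labels)
	}
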
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (31.124416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-483000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-483000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-483000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-483000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
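
Note: the assertion decodes the profile JSON above and counts Config.Nodes; because the VM was never created, only the single seed node from the initial config is present, so the expected count of 3 can never be met. A pared-down sketch of the check (struct reduced to the fields the count touches):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Pared-down mirror of the "profile list --output json" payload.
	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct{ Name string }
			}
		} `json:"valid"`
	}

	func main() {
		raw := `{"invalid":[],"valid":[{"Name":"multinode-483000","Config":{"Nodes":[{"Name":""}]}}]}`
		var pl profileList
		if err := json.Unmarshal([]byte(raw), &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d nodes\n", p.Name, len(p.Config.Nodes)) // want 3, have 1
		}
	}
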
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (30.75975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status --output json --alsologtostderr: exit status 7 (30.851958ms)

-- stdout --
	{"Name":"multinode-483000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0920 10:47:20.945431    8340 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:47:20.945573    8340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:20.945576    8340 out.go:358] Setting ErrFile to fd 2...
	I0920 10:47:20.945579    8340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:20.945717    8340 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:47:20.945844    8340 out.go:352] Setting JSON to true
	I0920 10:47:20.945854    8340 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:47:20.945922    8340 notify.go:220] Checking for updates...
	I0920 10:47:20.946069    8340 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:47:20.946077    8340 status.go:174] checking status of multinode-483000 ...
	I0920 10:47:20.946331    8340 status.go:364] multinode-483000 host status = "Stopped" (err=<nil>)
	I0920 10:47:20.946335    8340 status.go:377] host is not running, skipping remaining checks
	I0920 10:47:20.946337    8340 status.go:176] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-483000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
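
Note: the unmarshal failure is a shape mismatch, not corrupt output: with a single stopped node, "status --output json" printed one JSON object, while the multinode test decodes into a slice ([]cluster.Status). A sketch of a decode that tolerates either shape (Status is a trimmed stand-in type):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Trimmed stand-in for cluster.Status, matching the object in the log.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	// decodeStatuses accepts either a single status object or an array.
	func decodeStatuses(raw []byte) ([]Status, error) {
		var many []Status
		if err := json.Unmarshal(raw, &many); err == nil {
			return many, nil
		}
		var one Status
		if err := json.Unmarshal(raw, &one); err != nil {
			return nil, err
		}
		return []Status{one}, nil
	}

	func main() {
		raw := []byte(`{"Name":"multinode-483000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		sts, err := decodeStatuses(raw)
		fmt.Println(sts, err)
	}
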
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (30.749625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 node stop m03: exit status 85 (49.607792ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-483000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status: exit status 7 (30.847291ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr: exit status 7 (30.813875ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:47:21.088320    8348 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:47:21.088445    8348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:21.088448    8348 out.go:358] Setting ErrFile to fd 2...
	I0920 10:47:21.088450    8348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:21.088579    8348 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:47:21.088691    8348 out.go:352] Setting JSON to false
	I0920 10:47:21.088701    8348 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:47:21.088764    8348 notify.go:220] Checking for updates...
	I0920 10:47:21.088909    8348 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:47:21.088919    8348 status.go:174] checking status of multinode-483000 ...
	I0920 10:47:21.089154    8348 status.go:364] multinode-483000 host status = "Stopped" (err=<nil>)
	I0920 10:47:21.089157    8348 status.go:377] host is not running, skipping remaining checks
	I0920 10:47:21.089159    8348 status.go:176] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr": multinode-483000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
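
Note: "incorrect number of running kubelets" comes from scanning the status text above for kubelet lines in the Running state and comparing the count with the two nodes this cluster was supposed to have. A minimal sketch of that count (the exact substring matched is an assumption):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status text as printed above: one stopped control-plane node.
		out := "multinode-483000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		if running := strings.Count(out, "kubelet: Running"); running != 2 {
			fmt.Printf("incorrect number of running kubelets: %d\n", running)
		}
	}
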

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (30.227667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (51.06s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.711417ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0920 10:47:21.149573    8352 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:47:21.149963    8352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:21.149967    8352 out.go:358] Setting ErrFile to fd 2...
	I0920 10:47:21.149970    8352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:21.150164    8352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:47:21.150373    8352 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:47:21.150579    8352 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:47:21.154391    8352 out.go:201] 
	W0920 10:47:21.157311    8352 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0920 10:47:21.157320    8352 out.go:270] * 
	* 
	W0920 10:47:21.159278    8352 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:47:21.163368    8352 out.go:201] 

** /stderr **
multinode_test.go:284: I0920 10:47:21.149573    8352 out.go:345] Setting OutFile to fd 1 ...
I0920 10:47:21.149963    8352 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:47:21.149967    8352 out.go:358] Setting ErrFile to fd 2...
I0920 10:47:21.149970    8352 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:47:21.150164    8352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
I0920 10:47:21.150373    8352 mustload.go:65] Loading cluster: multinode-483000
I0920 10:47:21.150579    8352 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:47:21.154391    8352 out.go:201] 
W0920 10:47:21.157311    8352 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0920 10:47:21.157320    8352 out.go:270] * 
* 
W0920 10:47:21.159278    8352 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0920 10:47:21.163368    8352 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-483000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (31.215542ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:47:21.197802    8354 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:47:21.197981    8354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:21.197984    8354 out.go:358] Setting ErrFile to fd 2...
	I0920 10:47:21.197987    8354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:21.198167    8354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:47:21.198294    8354 out.go:352] Setting JSON to false
	I0920 10:47:21.198305    8354 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:47:21.198353    8354 notify.go:220] Checking for updates...
	I0920 10:47:21.198518    8354 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:47:21.198529    8354 status.go:174] checking status of multinode-483000 ...
	I0920 10:47:21.198776    8354 status.go:364] multinode-483000 host status = "Stopped" (err=<nil>)
	I0920 10:47:21.198780    8354 status.go:377] host is not running, skipping remaining checks
	I0920 10:47:21.198782    8354 status.go:176] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:47:21.199639    7191 retry.go:31] will retry after 1.358247841s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (74.244417ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:47:22.632427    8356 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:47:22.632612    8356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:22.632617    8356 out.go:358] Setting ErrFile to fd 2...
	I0920 10:47:22.632620    8356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:22.632776    8356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:47:22.632934    8356 out.go:352] Setting JSON to false
	I0920 10:47:22.632947    8356 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:47:22.632992    8356 notify.go:220] Checking for updates...
	I0920 10:47:22.633185    8356 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:47:22.633197    8356 status.go:174] checking status of multinode-483000 ...
	I0920 10:47:22.633495    8356 status.go:364] multinode-483000 host status = "Stopped" (err=<nil>)
	I0920 10:47:22.633500    8356 status.go:377] host is not running, skipping remaining checks
	I0920 10:47:22.633502    8356 status.go:176] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:47:22.634528    7191 retry.go:31] will retry after 1.431688677s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (74.519375ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:47:24.140247    8358 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:47:24.140428    8358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:24.140432    8358 out.go:358] Setting ErrFile to fd 2...
	I0920 10:47:24.140435    8358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:24.140599    8358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:47:24.140746    8358 out.go:352] Setting JSON to false
	I0920 10:47:24.140760    8358 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:47:24.140807    8358 notify.go:220] Checking for updates...
	I0920 10:47:24.141037    8358 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:47:24.141051    8358 status.go:174] checking status of multinode-483000 ...
	I0920 10:47:24.141358    8358 status.go:364] multinode-483000 host status = "Stopped" (err=<nil>)
	I0920 10:47:24.141363    8358 status.go:377] host is not running, skipping remaining checks
	I0920 10:47:24.141366    8358 status.go:176] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:47:24.142367    7191 retry.go:31] will retry after 1.788345233s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (74.49275ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:47:26.005607    8360 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:47:26.005810    8360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:26.005814    8360 out.go:358] Setting ErrFile to fd 2...
	I0920 10:47:26.005817    8360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:26.005971    8360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:47:26.006118    8360 out.go:352] Setting JSON to false
	I0920 10:47:26.006131    8360 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:47:26.006175    8360 notify.go:220] Checking for updates...
	I0920 10:47:26.006412    8360 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:47:26.006426    8360 status.go:174] checking status of multinode-483000 ...
	I0920 10:47:26.006744    8360 status.go:364] multinode-483000 host status = "Stopped" (err=<nil>)
	I0920 10:47:26.006749    8360 status.go:377] host is not running, skipping remaining checks
	I0920 10:47:26.006752    8360 status.go:176] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:47:26.007778    7191 retry.go:31] will retry after 3.695540479s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (73.442ms)

                                                
                                                
-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:47:29.776890    8362 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:47:29.777079    8362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:29.777083    8362 out.go:358] Setting ErrFile to fd 2...
	I0920 10:47:29.777086    8362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:29.777274    8362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:47:29.777433    8362 out.go:352] Setting JSON to false
	I0920 10:47:29.777446    8362 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:47:29.777493    8362 notify.go:220] Checking for updates...
	I0920 10:47:29.777715    8362 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:47:29.777726    8362 status.go:174] checking status of multinode-483000 ...
	I0920 10:47:29.778038    8362 status.go:364] multinode-483000 host status = "Stopped" (err=<nil>)
	I0920 10:47:29.778043    8362 status.go:377] host is not running, skipping remaining checks
	I0920 10:47:29.778046    8362 status.go:176] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:47:29.779145    7191 retry.go:31] will retry after 7.435776831s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (75.257834ms)

                                                
                                                
-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:47:37.290251    8364 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:47:37.290442    8364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:37.290446    8364 out.go:358] Setting ErrFile to fd 2...
	I0920 10:47:37.290449    8364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:37.290625    8364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:47:37.290783    8364 out.go:352] Setting JSON to false
	I0920 10:47:37.290796    8364 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:47:37.290822    8364 notify.go:220] Checking for updates...
	I0920 10:47:37.291098    8364 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:47:37.291109    8364 status.go:174] checking status of multinode-483000 ...
	I0920 10:47:37.291411    8364 status.go:364] multinode-483000 host status = "Stopped" (err=<nil>)
	I0920 10:47:37.291416    8364 status.go:377] host is not running, skipping remaining checks
	I0920 10:47:37.291419    8364 status.go:176] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:47:37.292523    7191 retry.go:31] will retry after 7.042887413s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (73.92225ms)

                                                
                                                
-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:47:44.409595    8366 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:47:44.409786    8366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:44.409790    8366 out.go:358] Setting ErrFile to fd 2...
	I0920 10:47:44.409793    8366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:44.409956    8366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:47:44.410105    8366 out.go:352] Setting JSON to false
	I0920 10:47:44.410119    8366 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:47:44.410158    8366 notify.go:220] Checking for updates...
	I0920 10:47:44.410393    8366 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:47:44.410403    8366 status.go:174] checking status of multinode-483000 ...
	I0920 10:47:44.410717    8366 status.go:364] multinode-483000 host status = "Stopped" (err=<nil>)
	I0920 10:47:44.410722    8366 status.go:377] host is not running, skipping remaining checks
	I0920 10:47:44.410725    8366 status.go:176] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:47:44.411747    7191 retry.go:31] will retry after 6.70663417s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (76.045125ms)

                                                
                                                
-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:47:51.194466    8371 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:47:51.194659    8371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:51.194664    8371 out.go:358] Setting ErrFile to fd 2...
	I0920 10:47:51.194667    8371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:51.194819    8371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:47:51.195000    8371 out.go:352] Setting JSON to false
	I0920 10:47:51.195013    8371 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:47:51.195059    8371 notify.go:220] Checking for updates...
	I0920 10:47:51.195300    8371 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:47:51.195313    8371 status.go:174] checking status of multinode-483000 ...
	I0920 10:47:51.195632    8371 status.go:364] multinode-483000 host status = "Stopped" (err=<nil>)
	I0920 10:47:51.195637    8371 status.go:377] host is not running, skipping remaining checks
	I0920 10:47:51.195639    8371 status.go:176] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:47:51.196699    7191 retry.go:31] will retry after 20.877465197s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (72.429209ms)

                                                
                                                
-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:48:12.146843    8373 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:48:12.147029    8373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:12.147033    8373 out.go:358] Setting ErrFile to fd 2...
	I0920 10:48:12.147036    8373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:12.147203    8373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:48:12.147365    8373 out.go:352] Setting JSON to false
	I0920 10:48:12.147388    8373 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:48:12.147426    8373 notify.go:220] Checking for updates...
	I0920 10:48:12.147649    8373 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:48:12.147660    8373 status.go:174] checking status of multinode-483000 ...
	I0920 10:48:12.147957    8373 status.go:364] multinode-483000 host status = "Stopped" (err=<nil>)
	I0920 10:48:12.147962    8373 status.go:377] host is not running, skipping remaining checks
	I0920 10:48:12.147965    8373 status.go:176] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (33.113084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.06s)
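
Note on the failure mode: the retry.go:31 lines above show the test helper polling "minikube status" with jittered, growing delays (1.8s, 3.7s, 7.4s, 7.0s, 6.7s, 20.9s) until its budget runs out; since the VM never leaves the Stopped state, every poll exits with status 7 and the test fails after ~51s. Below is a minimal Go sketch of that polling pattern; the function name, argument handling, and exact backoff policy are illustrative assumptions, not minikube's actual retry helper.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// waitForRunning polls "minikube status" with jittered backoff, in the
// spirit of the retry.go:31 lines above. Illustrative sketch only.
func waitForRunning(profile string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	backoff := time.Second
	for {
		err := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"status", "-v=7", "--alsologtostderr").Run()
		if err == nil {
			return nil // exit status 0: host, kubelet and apiserver are up
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("cluster never became healthy: %w", err)
		}
		// Jittered, growing delays, like the 1.8s .. 20.9s waits logged above.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
		backoff *= 2
	}
}

func main() {
	if err := waitForRunning("multinode-483000", 50*time.Second); err != nil {
		fmt.Println(err)
	}
}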

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (9.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-483000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-483000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-483000: (3.802592167s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-483000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-483000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.214874625s)

                                                
                                                
-- stdout --
	* [multinode-483000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-483000" primary control-plane node in "multinode-483000" cluster
	* Restarting existing qemu2 VM for "multinode-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:48:16.078064    8401 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:48:16.078233    8401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:16.078237    8401 out.go:358] Setting ErrFile to fd 2...
	I0920 10:48:16.078241    8401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:16.078429    8401 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:48:16.079697    8401 out.go:352] Setting JSON to false
	I0920 10:48:16.098426    8401 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4659,"bootTime":1726849837,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:48:16.098509    8401 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:48:16.102707    8401 out.go:177] * [multinode-483000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:48:16.108600    8401 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:48:16.108645    8401 notify.go:220] Checking for updates...
	I0920 10:48:16.115632    8401 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:48:16.118661    8401 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:48:16.121620    8401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:48:16.124664    8401 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:48:16.126010    8401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:48:16.128898    8401 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:48:16.128953    8401 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:48:16.133675    8401 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:48:16.138542    8401 start.go:297] selected driver: qemu2
	I0920 10:48:16.138547    8401 start.go:901] validating driver "qemu2" against &{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:48:16.138596    8401 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:48:16.140957    8401 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:48:16.140984    8401 cni.go:84] Creating CNI manager for ""
	I0920 10:48:16.141012    8401 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 10:48:16.141061    8401 start.go:340] cluster config:
	{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:48:16.144859    8401 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:16.152557    8401 out.go:177] * Starting "multinode-483000" primary control-plane node in "multinode-483000" cluster
	I0920 10:48:16.156639    8401 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:48:16.156654    8401 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:48:16.156661    8401 cache.go:56] Caching tarball of preloaded images
	I0920 10:48:16.156726    8401 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:48:16.156732    8401 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:48:16.156821    8401 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/multinode-483000/config.json ...
	I0920 10:48:16.157294    8401 start.go:360] acquireMachinesLock for multinode-483000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:48:16.157331    8401 start.go:364] duration metric: took 29.917µs to acquireMachinesLock for "multinode-483000"
	I0920 10:48:16.157341    8401 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:48:16.157345    8401 fix.go:54] fixHost starting: 
	I0920 10:48:16.157471    8401 fix.go:112] recreateIfNeeded on multinode-483000: state=Stopped err=<nil>
	W0920 10:48:16.157479    8401 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:48:16.165617    8401 out.go:177] * Restarting existing qemu2 VM for "multinode-483000" ...
	I0920 10:48:16.169705    8401 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:48:16.169745    8401 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:22:bc:38:bd:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2
	I0920 10:48:16.172035    8401 main.go:141] libmachine: STDOUT: 
	I0920 10:48:16.172055    8401 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:48:16.172087    8401 fix.go:56] duration metric: took 14.740292ms for fixHost
	I0920 10:48:16.172093    8401 start.go:83] releasing machines lock for "multinode-483000", held for 14.756792ms
	W0920 10:48:16.172100    8401 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:48:16.172141    8401 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:48:16.172146    8401 start.go:729] Will try again in 5 seconds ...
	I0920 10:48:21.174344    8401 start.go:360] acquireMachinesLock for multinode-483000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:48:21.174767    8401 start.go:364] duration metric: took 334.875µs to acquireMachinesLock for "multinode-483000"
	I0920 10:48:21.174898    8401 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:48:21.174922    8401 fix.go:54] fixHost starting: 
	I0920 10:48:21.175621    8401 fix.go:112] recreateIfNeeded on multinode-483000: state=Stopped err=<nil>
	W0920 10:48:21.175651    8401 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:48:21.180026    8401 out.go:177] * Restarting existing qemu2 VM for "multinode-483000" ...
	I0920 10:48:21.184063    8401 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:48:21.184302    8401 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:22:bc:38:bd:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2
	I0920 10:48:21.193256    8401 main.go:141] libmachine: STDOUT: 
	I0920 10:48:21.193326    8401 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:48:21.193406    8401 fix.go:56] duration metric: took 18.489292ms for fixHost
	I0920 10:48:21.193425    8401 start.go:83] releasing machines lock for "multinode-483000", held for 18.636125ms
	W0920 10:48:21.193596    8401 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-483000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-483000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:48:21.201024    8401 out.go:201] 
	W0920 10:48:21.205036    8401 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:48:21.205053    8401 out.go:270] * 
	* 
	W0920 10:48:21.207370    8401 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:48:21.217046    8401 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-483000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-483000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (33.514583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.15s)
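
Note on the failure mode: both restart attempts above die before the VM boots. libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot connect to the /var/run/socket_vmnet unix socket ("Connection refused"), which points at the socket_vmnet daemon not running (or not listening at that path) on the build agent rather than at the VM image itself. The following diagnostic sketch is not part of minikube; it just reproduces the same reachability check from Go, with the socket path taken from SocketVMnetPath in the config dump above.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With no daemon accepting on the socket, connect(2) returns
		// ECONNREFUSED -- the same failure the driver logs above.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}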

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 node delete m03: exit status 83 (41.91375ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-483000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-483000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-483000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr: exit status 7 (30.921792ms)

                                                
                                                
-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:48:21.404938    8415 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:48:21.405090    8415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:21.405093    8415 out.go:358] Setting ErrFile to fd 2...
	I0920 10:48:21.405095    8415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:21.405222    8415 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:48:21.405340    8415 out.go:352] Setting JSON to false
	I0920 10:48:21.405352    8415 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:48:21.405409    8415 notify.go:220] Checking for updates...
	I0920 10:48:21.405576    8415 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:48:21.405585    8415 status.go:174] checking status of multinode-483000 ...
	I0920 10:48:21.405828    8415 status.go:364] multinode-483000 host status = "Stopped" (err=<nil>)
	I0920 10:48:21.405831    8415 status.go:377] host is not running, skipping remaining checks
	I0920 10:48:21.405833    8415 status.go:176] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (31.086667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
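
Note on the post-mortem command: the helper asks for only the host field with --format={{.Host}}, and minikube renders its status struct through a Go text/template, which is why the output collapses to the single word "Stopped". A hedged sketch of that rendering path follows; the Status fields mirror the struct dumped in the stderr logs above, while the flag plumbing and type name are illustrative.

package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields visible in the status.go:176 dump above;
// the real minikube struct has more fields.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	st := Status{Name: "multinode-483000", Host: "Stopped",
		Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	// --format={{.Host}} style rendering: prints just "Stopped".
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, st)
}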

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-483000 stop: (3.373242209s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status: exit status 7 (67.261083ms)

                                                
                                                
-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr: exit status 7 (32.498666ms)

                                                
                                                
-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:48:24.909621    8441 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:48:24.909755    8441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:24.909759    8441 out.go:358] Setting ErrFile to fd 2...
	I0920 10:48:24.909761    8441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:24.909884    8441 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:48:24.910009    8441 out.go:352] Setting JSON to false
	I0920 10:48:24.910025    8441 mustload.go:65] Loading cluster: multinode-483000
	I0920 10:48:24.910075    8441 notify.go:220] Checking for updates...
	I0920 10:48:24.910245    8441 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:48:24.910253    8441 status.go:174] checking status of multinode-483000 ...
	I0920 10:48:24.910487    8441 status.go:364] multinode-483000 host status = "Stopped" (err=<nil>)
	I0920 10:48:24.910491    8441 status.go:377] host is not running, skipping remaining checks
	I0920 10:48:24.910493    8441 status.go:176] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr": multinode-483000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr": multinode-483000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (31.007667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.50s)
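
Note on the failure mode: the stop itself succeeds (3.37s), but the assertions at multinode_test.go:364 and :368 expect one "host: Stopped" and one "kubelet: Stopped" line per node of a multinode cluster. Because the worker nodes were never created earlier in this run, status reports only the single control-plane node and the counts come up short. The sketch below shows roughly the kind of count check involved; the real assertions live in multinode_test.go, and the expected node count here is an assumption.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Output as captured above: only the control-plane node is listed.
	statusOut := "multinode-483000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	wantNodes := 2 // a two-node profile should report both nodes

	if got := strings.Count(statusOut, "host: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
	}
	if got := strings.Count(statusOut, "kubelet: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
	}
}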

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-483000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-483000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.182534625s)

                                                
                                                
-- stdout --
	* [multinode-483000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-483000" primary control-plane node in "multinode-483000" cluster
	* Restarting existing qemu2 VM for "multinode-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:48:24.971117    8445 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:48:24.971257    8445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:24.971261    8445 out.go:358] Setting ErrFile to fd 2...
	I0920 10:48:24.971263    8445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:24.971387    8445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:48:24.972393    8445 out.go:352] Setting JSON to false
	I0920 10:48:24.988519    8445 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4667,"bootTime":1726849837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:48:24.988589    8445 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:48:24.993742    8445 out.go:177] * [multinode-483000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:48:25.000731    8445 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:48:25.000774    8445 notify.go:220] Checking for updates...
	I0920 10:48:25.008739    8445 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:48:25.011721    8445 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:48:25.014689    8445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:48:25.017725    8445 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:48:25.020716    8445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:48:25.023969    8445 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:48:25.024231    8445 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:48:25.028672    8445 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:48:25.035670    8445 start.go:297] selected driver: qemu2
	I0920 10:48:25.035676    8445 start.go:901] validating driver "qemu2" against &{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:48:25.035742    8445 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:48:25.038146    8445 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:48:25.038169    8445 cni.go:84] Creating CNI manager for ""
	I0920 10:48:25.038194    8445 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 10:48:25.038241    8445 start.go:340] cluster config:
	{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:48:25.041824    8445 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:25.049619    8445 out.go:177] * Starting "multinode-483000" primary control-plane node in "multinode-483000" cluster
	I0920 10:48:25.053707    8445 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:48:25.053726    8445 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:48:25.053738    8445 cache.go:56] Caching tarball of preloaded images
	I0920 10:48:25.053807    8445 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:48:25.053813    8445 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:48:25.053883    8445 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/multinode-483000/config.json ...
	I0920 10:48:25.054327    8445 start.go:360] acquireMachinesLock for multinode-483000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:48:25.054356    8445 start.go:364] duration metric: took 22.458µs to acquireMachinesLock for "multinode-483000"
	I0920 10:48:25.054366    8445 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:48:25.054371    8445 fix.go:54] fixHost starting: 
	I0920 10:48:25.054502    8445 fix.go:112] recreateIfNeeded on multinode-483000: state=Stopped err=<nil>
	W0920 10:48:25.054511    8445 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:48:25.058546    8445 out.go:177] * Restarting existing qemu2 VM for "multinode-483000" ...
	I0920 10:48:25.066696    8445 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:48:25.066737    8445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:22:bc:38:bd:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2
	I0920 10:48:25.068757    8445 main.go:141] libmachine: STDOUT: 
	I0920 10:48:25.068776    8445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:48:25.068805    8445 fix.go:56] duration metric: took 14.432209ms for fixHost
	I0920 10:48:25.068811    8445 start.go:83] releasing machines lock for "multinode-483000", held for 14.450042ms
	W0920 10:48:25.068817    8445 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:48:25.068855    8445 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:48:25.068860    8445 start.go:729] Will try again in 5 seconds ...
	I0920 10:48:30.070993    8445 start.go:360] acquireMachinesLock for multinode-483000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:48:30.071360    8445 start.go:364] duration metric: took 282.75µs to acquireMachinesLock for "multinode-483000"
	I0920 10:48:30.071482    8445 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:48:30.071502    8445 fix.go:54] fixHost starting: 
	I0920 10:48:30.072196    8445 fix.go:112] recreateIfNeeded on multinode-483000: state=Stopped err=<nil>
	W0920 10:48:30.072222    8445 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:48:30.076730    8445 out.go:177] * Restarting existing qemu2 VM for "multinode-483000" ...
	I0920 10:48:30.080584    8445 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:48:30.080891    8445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:22:bc:38:bd:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/multinode-483000/disk.qcow2
	I0920 10:48:30.089510    8445 main.go:141] libmachine: STDOUT: 
	I0920 10:48:30.089564    8445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:48:30.089643    8445 fix.go:56] duration metric: took 18.145708ms for fixHost
	I0920 10:48:30.089660    8445 start.go:83] releasing machines lock for "multinode-483000", held for 18.27725ms
	W0920 10:48:30.089803    8445 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-483000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-483000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:48:30.097602    8445 out.go:201] 
	W0920 10:48:30.101713    8445 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:48:30.101739    8445 out.go:270] * 
	* 
	W0920 10:48:30.104544    8445 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:48:30.112587    8445 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-483000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (73.449334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
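
All of the failures in this report reduce to the same host-side condition: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to the unix socket at /var/run/socket_vmnet and hands the connected descriptor to QEMU as fd 3 (hence "-netdev socket,id=net0,fd=3" in the command line logged above). With no socket_vmnet daemon listening on that socket, the connect fails with "Connection refused" and the VM never boots, so every start/create attempt exits with status 80. A minimal host-side triage sketch, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe:

	# Is a daemon actually holding the socket?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If not, restart the launchd service (socket_vmnet needs root for vmnet).
	sudo brew services restart socket_vmnet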

TestMultiNode/serial/ValidateNameConflict (20.19s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-483000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-483000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-483000-m01 --driver=qemu2 : exit status 80 (9.967558625s)

-- stdout --
	* [multinode-483000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-483000-m01" primary control-plane node in "multinode-483000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-483000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-483000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-483000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-483000-m02 --driver=qemu2 : exit status 80 (9.997554167s)

-- stdout --
	* [multinode-483000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-483000-m02" primary control-plane node in "multinode-483000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-483000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-483000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-483000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-483000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-483000: exit status 83 (80.056416ms)

-- stdout --
	* The control-plane node multinode-483000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-483000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-483000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (31.204583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.19s)

TestPreload (9.99s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-282000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-282000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.832522417s)

-- stdout --
	* [test-preload-282000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-282000" primary control-plane node in "test-preload-282000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-282000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:48:50.531200    8503 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:48:50.531321    8503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:50.531325    8503 out.go:358] Setting ErrFile to fd 2...
	I0920 10:48:50.531328    8503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:50.531463    8503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:48:50.532504    8503 out.go:352] Setting JSON to false
	I0920 10:48:50.548864    8503 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4693,"bootTime":1726849837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:48:50.548930    8503 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:48:50.555237    8503 out.go:177] * [test-preload-282000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:48:50.563213    8503 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:48:50.563254    8503 notify.go:220] Checking for updates...
	I0920 10:48:50.571178    8503 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:48:50.574226    8503 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:48:50.577276    8503 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:48:50.580192    8503 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:48:50.583237    8503 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:48:50.586607    8503 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:48:50.586659    8503 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:48:50.591186    8503 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:48:50.598228    8503 start.go:297] selected driver: qemu2
	I0920 10:48:50.598237    8503 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:48:50.598245    8503 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:48:50.600545    8503 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:48:50.604177    8503 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:48:50.607365    8503 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:48:50.607393    8503 cni.go:84] Creating CNI manager for ""
	I0920 10:48:50.607426    8503 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:48:50.607430    8503 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:48:50.607466    8503 start.go:340] cluster config:
	{Name:test-preload-282000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-282000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:48:50.611089    8503 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:50.618190    8503 out.go:177] * Starting "test-preload-282000" primary control-plane node in "test-preload-282000" cluster
	I0920 10:48:50.622203    8503 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0920 10:48:50.622280    8503 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/test-preload-282000/config.json ...
	I0920 10:48:50.622304    8503 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/test-preload-282000/config.json: {Name:mka0a1961c454036f677107d6be7365fec90e123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:48:50.622366    8503 cache.go:107] acquiring lock: {Name:mkc831d2b996411ad9b2ce79b491563b42f25287 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:50.622386    8503 cache.go:107] acquiring lock: {Name:mk1a608ef21d999ed31a1bd7eab28e7dcf67376c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:50.622374    8503 cache.go:107] acquiring lock: {Name:mk6e40fad82d4809106b18ed9ae4f5c0a3381efd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:50.622359    8503 cache.go:107] acquiring lock: {Name:mk3e2bbcc68df508dbf854add48a44afb7944430 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:50.622527    8503 cache.go:107] acquiring lock: {Name:mk563d796a893b8eded92c420d3ed36a8c422e0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:50.622564    8503 cache.go:107] acquiring lock: {Name:mk32caed4e14625706c779054405cc9779e91ffe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:50.622587    8503 cache.go:107] acquiring lock: {Name:mk885206e5b4d47b87e43cf24f7dcbac707f3d69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:50.622612    8503 cache.go:107] acquiring lock: {Name:mk10ca16c1348ba8cc131fc86aa0592ae2da7ae5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:50.622740    8503 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0920 10:48:50.622751    8503 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0920 10:48:50.622784    8503 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:48:50.622789    8503 start.go:360] acquireMachinesLock for test-preload-282000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:48:50.622857    8503 start.go:364] duration metric: took 37.041µs to acquireMachinesLock for "test-preload-282000"
	I0920 10:48:50.622861    8503 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0920 10:48:50.622870    8503 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 10:48:50.622878    8503 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 10:48:50.622874    8503 start.go:93] Provisioning new machine with config: &{Name:test-preload-282000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-282000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:48:50.622909    8503 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:48:50.622858    8503 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:48:50.623399    8503 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:48:50.627166    8503 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:48:50.633390    8503 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 10:48:50.634794    8503 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 10:48:50.634788    8503 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0920 10:48:50.634792    8503 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:48:50.634849    8503 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0920 10:48:50.634848    8503 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0920 10:48:50.635130    8503 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:48:50.635268    8503 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:48:50.645827    8503 start.go:159] libmachine.API.Create for "test-preload-282000" (driver="qemu2")
	I0920 10:48:50.645848    8503 client.go:168] LocalClient.Create starting
	I0920 10:48:50.645937    8503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:48:50.645968    8503 main.go:141] libmachine: Decoding PEM data...
	I0920 10:48:50.645981    8503 main.go:141] libmachine: Parsing certificate...
	I0920 10:48:50.646026    8503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:48:50.646049    8503 main.go:141] libmachine: Decoding PEM data...
	I0920 10:48:50.646068    8503 main.go:141] libmachine: Parsing certificate...
	I0920 10:48:50.646368    8503 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:48:50.813249    8503 main.go:141] libmachine: Creating SSH key...
	I0920 10:48:50.877597    8503 main.go:141] libmachine: Creating Disk image...
	I0920 10:48:50.877635    8503 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:48:50.877825    8503 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/disk.qcow2
	I0920 10:48:50.887260    8503 main.go:141] libmachine: STDOUT: 
	I0920 10:48:50.887280    8503 main.go:141] libmachine: STDERR: 
	I0920 10:48:50.887330    8503 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/disk.qcow2 +20000M
	I0920 10:48:50.896461    8503 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:48:50.896483    8503 main.go:141] libmachine: STDERR: 
	I0920 10:48:50.896500    8503 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/disk.qcow2
	I0920 10:48:50.896506    8503 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:48:50.896519    8503 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:48:50.896553    8503 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:1c:1b:61:d0:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/disk.qcow2
	I0920 10:48:50.898930    8503 main.go:141] libmachine: STDOUT: 
	I0920 10:48:50.898946    8503 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:48:50.898966    8503 client.go:171] duration metric: took 253.114458ms to LocalClient.Create
	I0920 10:48:50.984969    8503 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0920 10:48:50.985606    8503 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0920 10:48:51.042293    8503 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0920 10:48:51.095271    8503 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0920 10:48:51.119701    8503 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0920 10:48:51.119725    8503 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 497.255333ms
	I0920 10:48:51.119741    8503 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0920 10:48:51.146231    8503 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0920 10:48:51.147266    8503 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0920 10:48:51.147346    8503 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 10:48:51.191191    8503 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0920 10:48:51.632407    8503 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0920 10:48:51.632515    8503 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 10:48:52.161993    8503 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0920 10:48:52.162039    8503 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.53968275s
	I0920 10:48:52.162067    8503 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0920 10:48:52.353499    8503 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0920 10:48:52.353571    8503 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 1.731217375s
	I0920 10:48:52.353613    8503 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0920 10:48:52.851310    8503 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0920 10:48:52.851359    8503 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.228856375s
	I0920 10:48:52.851387    8503 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0920 10:48:52.899149    8503 start.go:128] duration metric: took 2.276233875s to createHost
	I0920 10:48:52.899179    8503 start.go:83] releasing machines lock for "test-preload-282000", held for 2.276324s
	W0920 10:48:52.899239    8503 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:48:52.919602    8503 out.go:177] * Deleting "test-preload-282000" in qemu2 ...
	W0920 10:48:52.957032    8503 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:48:52.957053    8503 start.go:729] Will try again in 5 seconds ...
	I0920 10:48:55.076411    8503 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0920 10:48:55.076490    8503 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.453939958s
	I0920 10:48:55.076532    8503 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0920 10:48:55.444818    8503 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0920 10:48:55.444872    8503 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.822502458s
	I0920 10:48:55.444895    8503 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0920 10:48:55.849945    8503 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0920 10:48:55.849989    8503 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.227663167s
	I0920 10:48:55.850027    8503 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0920 10:48:57.957174    8503 start.go:360] acquireMachinesLock for test-preload-282000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:48:57.957585    8503 start.go:364] duration metric: took 338.583µs to acquireMachinesLock for "test-preload-282000"
	I0920 10:48:57.957713    8503 start.go:93] Provisioning new machine with config: &{Name:test-preload-282000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-282000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:48:57.957946    8503 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:48:57.963253    8503 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:48:58.014637    8503 start.go:159] libmachine.API.Create for "test-preload-282000" (driver="qemu2")
	I0920 10:48:58.014684    8503 client.go:168] LocalClient.Create starting
	I0920 10:48:58.014792    8503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:48:58.014869    8503 main.go:141] libmachine: Decoding PEM data...
	I0920 10:48:58.014888    8503 main.go:141] libmachine: Parsing certificate...
	I0920 10:48:58.014945    8503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:48:58.014990    8503 main.go:141] libmachine: Decoding PEM data...
	I0920 10:48:58.015000    8503 main.go:141] libmachine: Parsing certificate...
	I0920 10:48:58.015493    8503 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:48:58.191185    8503 main.go:141] libmachine: Creating SSH key...
	I0920 10:48:58.259874    8503 main.go:141] libmachine: Creating Disk image...
	I0920 10:48:58.259886    8503 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:48:58.260062    8503 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/disk.qcow2
	I0920 10:48:58.269495    8503 main.go:141] libmachine: STDOUT: 
	I0920 10:48:58.269514    8503 main.go:141] libmachine: STDERR: 
	I0920 10:48:58.269581    8503 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/disk.qcow2 +20000M
	I0920 10:48:58.277633    8503 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:48:58.277651    8503 main.go:141] libmachine: STDERR: 
	I0920 10:48:58.277664    8503 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/disk.qcow2
	I0920 10:48:58.277668    8503 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:48:58.277677    8503 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:48:58.277714    8503 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:63:91:e2:02:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/test-preload-282000/disk.qcow2
	I0920 10:48:58.279341    8503 main.go:141] libmachine: STDOUT: 
	I0920 10:48:58.279355    8503 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:48:58.279370    8503 client.go:171] duration metric: took 264.681959ms to LocalClient.Create
	I0920 10:48:59.889349    8503 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0920 10:48:59.889432    8503 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.266861625s
	I0920 10:48:59.889478    8503 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0920 10:48:59.889528    8503 cache.go:87] Successfully saved all images to host disk.
	I0920 10:49:00.281628    8503 start.go:128] duration metric: took 2.323661125s to createHost
	I0920 10:49:00.281688    8503 start.go:83] releasing machines lock for "test-preload-282000", held for 2.324092916s
	W0920 10:49:00.281980    8503 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-282000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-282000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:49:00.302621    8503 out.go:201] 
	W0920 10:49:00.306721    8503 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:49:00.306748    8503 out.go:270] * 
	* 
	W0920 10:49:00.309560    8503 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:49:00.321647    8503 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-282000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-20 10:49:00.337626 -0700 PDT m=+637.115335501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-282000 -n test-preload-282000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-282000 -n test-preload-282000: exit status 7 (68.547333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-282000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-282000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-282000
--- FAIL: TestPreload (9.99s)
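
Note that the TestPreload log above isolates the fault cleanly: every image-cache operation succeeds (ending with "Successfully saved all images to host disk") while both VM creation attempts fail with the identical connect error, so the Kubernetes assets are fine and only host networking is broken. The socket can be probed directly with the same client binary the driver uses; socket_vmnet_client connects to the socket and then execs its remaining arguments, so a trivial command suffices. A hypothetical probe, reusing the paths from the log:

	# socket_vmnet_client <socket> <command...>: connect, then exec the command.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# On this host it would print:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused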

TestScheduledStopUnix (10.1s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-085000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-085000 --memory=2048 --driver=qemu2 : exit status 80 (9.941315875s)

-- stdout --
	* [scheduled-stop-085000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-085000" primary control-plane node in "scheduled-stop-085000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-085000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-085000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-085000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-085000" primary control-plane node in "scheduled-stop-085000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-085000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-085000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-20 10:49:10.431128 -0700 PDT m=+647.208891084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-085000 -n scheduled-stop-085000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-085000 -n scheduled-stop-085000: exit status 7 (68.638167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-085000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-085000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-085000
--- FAIL: TestScheduledStopUnix (10.10s)
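
None of the scheduled-stop logic is exercised here; the test aborts at "minikube start" like the others. Once the daemon is healthy, a single integration test can be re-run in isolation using Go's standard -run filter. A sketch assuming a minikube source checkout; the exact extra flags this CI job passes (driver selection, binary path) are not shown in the log and may differ:

	# Hypothetical single-test re-run from the repository root.
	go test ./test/integration -run 'TestScheduledStopUnix' -v -timeout 30m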

TestSkaffold (12.3s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3656956934 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3656956934 version: (1.063712083s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-881000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-881000 --memory=2600 --driver=qemu2 : exit status 80 (9.975894125s)

-- stdout --
	* [skaffold-881000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-881000" primary control-plane node in "skaffold-881000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-881000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-881000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-881000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-881000" primary control-plane node in "skaffold-881000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-881000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-881000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-20 10:49:22.742381 -0700 PDT m=+659.520209417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-881000 -n skaffold-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-881000 -n skaffold-881000: exit status 7 (60.350083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-881000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-881000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-881000
--- FAIL: TestSkaffold (12.30s)
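Note: both create attempts above die on the same error: the qemu2 driver cannot reach the socket_vmnet daemon's socket at /var/run/socket_vmnet ("Connection refused"), so the VM is deleted, recreated once, and the test exits with GUEST_PROVISION / exit status 80. A minimal spot-check on the CI host, assuming a Homebrew-managed socket_vmnet install (service name and management commands are assumptions, not taken from this log):

    # Does the socket exist, and is the daemon registered with launchd?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet
    # If the daemon has died, restart it (Homebrew service name assumed)
    sudo brew services restart socket_vmnet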

TestRunningBinaryUpgrade (605.36s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2948501082 start -p running-upgrade-568000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2948501082 start -p running-upgrade-568000 --memory=2200 --vm-driver=qemu2 : (53.231959875s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-568000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-568000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m37.586367667s)

-- stdout --
	* [running-upgrade-568000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-568000" primary control-plane node in "running-upgrade-568000" cluster
	* Updating the running qemu2 "running-upgrade-568000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0920 10:50:57.837325    8893 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:50:57.837489    8893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:50:57.837493    8893 out.go:358] Setting ErrFile to fd 2...
	I0920 10:50:57.837495    8893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:50:57.837620    8893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:50:57.838669    8893 out.go:352] Setting JSON to false
	I0920 10:50:57.855378    8893 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4820,"bootTime":1726849837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:50:57.855457    8893 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:50:57.859872    8893 out.go:177] * [running-upgrade-568000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:50:57.867768    8893 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:50:57.867810    8893 notify.go:220] Checking for updates...
	I0920 10:50:57.875710    8893 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:50:57.879689    8893 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:50:57.882775    8893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:50:57.885715    8893 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:50:57.888690    8893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:50:57.892062    8893 config.go:182] Loaded profile config "running-upgrade-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:50:57.895633    8893 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 10:50:57.898693    8893 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:50:57.902755    8893 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:50:57.909644    8893 start.go:297] selected driver: qemu2
	I0920 10:50:57.909649    8893 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:50:57.909702    8893 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:50:57.912230    8893 cni.go:84] Creating CNI manager for ""
	I0920 10:50:57.912264    8893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:50:57.912293    8893 start.go:340] cluster config:
	{Name:running-upgrade-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:50:57.912344    8893 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:50:57.919736    8893 out.go:177] * Starting "running-upgrade-568000" primary control-plane node in "running-upgrade-568000" cluster
	I0920 10:50:57.923687    8893 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:50:57.923707    8893 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0920 10:50:57.923714    8893 cache.go:56] Caching tarball of preloaded images
	I0920 10:50:57.923777    8893 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:50:57.923782    8893 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0920 10:50:57.923830    8893 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/config.json ...
	I0920 10:50:57.924347    8893 start.go:360] acquireMachinesLock for running-upgrade-568000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:50:57.924383    8893 start.go:364] duration metric: took 29.708µs to acquireMachinesLock for "running-upgrade-568000"
	I0920 10:50:57.924393    8893 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:50:57.924397    8893 fix.go:54] fixHost starting: 
	I0920 10:50:57.925087    8893 fix.go:112] recreateIfNeeded on running-upgrade-568000: state=Running err=<nil>
	W0920 10:50:57.925096    8893 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:50:57.929708    8893 out.go:177] * Updating the running qemu2 "running-upgrade-568000" VM ...
	I0920 10:50:57.936623    8893 machine.go:93] provisionDockerMachine start ...
	I0920 10:50:57.936666    8893 main.go:141] libmachine: Using SSH client type: native
	I0920 10:50:57.936773    8893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008d5c00] 0x1008d8440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:50:57.936778    8893 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 10:50:58.011184    8893 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-568000
	
	I0920 10:50:58.011196    8893 buildroot.go:166] provisioning hostname "running-upgrade-568000"
	I0920 10:50:58.011251    8893 main.go:141] libmachine: Using SSH client type: native
	I0920 10:50:58.011360    8893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008d5c00] 0x1008d8440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:50:58.011367    8893 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-568000 && echo "running-upgrade-568000" | sudo tee /etc/hostname
	I0920 10:50:58.090584    8893 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-568000
	
	I0920 10:50:58.090643    8893 main.go:141] libmachine: Using SSH client type: native
	I0920 10:50:58.090757    8893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008d5c00] 0x1008d8440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:50:58.090766    8893 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-568000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-568000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-568000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 10:50:58.162543    8893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:50:58.162554    8893 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19678-6679/.minikube CaCertPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19678-6679/.minikube}
	I0920 10:50:58.162560    8893 buildroot.go:174] setting up certificates
	I0920 10:50:58.162565    8893 provision.go:84] configureAuth start
	I0920 10:50:58.162569    8893 provision.go:143] copyHostCerts
	I0920 10:50:58.162648    8893 exec_runner.go:144] found /Users/jenkins/minikube-integration/19678-6679/.minikube/cert.pem, removing ...
	I0920 10:50:58.162653    8893 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19678-6679/.minikube/cert.pem
	I0920 10:50:58.162821    8893 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19678-6679/.minikube/cert.pem (1123 bytes)
	I0920 10:50:58.162999    8893 exec_runner.go:144] found /Users/jenkins/minikube-integration/19678-6679/.minikube/key.pem, removing ...
	I0920 10:50:58.163003    8893 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19678-6679/.minikube/key.pem
	I0920 10:50:58.163058    8893 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19678-6679/.minikube/key.pem (1675 bytes)
	I0920 10:50:58.163166    8893 exec_runner.go:144] found /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.pem, removing ...
	I0920 10:50:58.163169    8893 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.pem
	I0920 10:50:58.163219    8893 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.pem (1078 bytes)
	I0920 10:50:58.163304    8893 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-568000 san=[127.0.0.1 localhost minikube running-upgrade-568000]
	I0920 10:50:58.322307    8893 provision.go:177] copyRemoteCerts
	I0920 10:50:58.322360    8893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 10:50:58.322369    8893 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0920 10:50:58.362784    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 10:50:58.369734    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 10:50:58.377142    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 10:50:58.383655    8893 provision.go:87] duration metric: took 221.078875ms to configureAuth
	I0920 10:50:58.383664    8893 buildroot.go:189] setting minikube options for container-runtime
	I0920 10:50:58.383772    8893 config.go:182] Loaded profile config "running-upgrade-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:50:58.383809    8893 main.go:141] libmachine: Using SSH client type: native
	I0920 10:50:58.383893    8893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008d5c00] 0x1008d8440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:50:58.383898    8893 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 10:50:58.455687    8893 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 10:50:58.455697    8893 buildroot.go:70] root file system type: tmpfs
	I0920 10:50:58.455751    8893 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 10:50:58.455807    8893 main.go:141] libmachine: Using SSH client type: native
	I0920 10:50:58.455922    8893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008d5c00] 0x1008d8440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:50:58.455955    8893 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 10:50:58.531911    8893 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 10:50:58.531984    8893 main.go:141] libmachine: Using SSH client type: native
	I0920 10:50:58.532100    8893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008d5c00] 0x1008d8440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:50:58.532108    8893 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 10:50:58.606893    8893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:50:58.606904    8893 machine.go:96] duration metric: took 670.276292ms to provisionDockerMachine
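Note: the provisioner renders a complete replacement docker.service and installs it only when it differs from the unit already on disk (the diff -u ... || { mv ...; systemctl ... } one-liner above). The bare "ExecStart=" line is deliberate: systemd allows only one ExecStart= for non-oneshot services, so the empty assignment clears any inherited value before the new command is set. The same install-if-changed pattern as a standalone sketch (render_unit is a hypothetical generator, not a minikube command):

    render_unit > /tmp/docker.service.new
    if ! sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new; then
      # Content changed: install the new unit and restart the service
      sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload && sudo systemctl restart docker
    fi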
	I0920 10:50:58.606909    8893 start.go:293] postStartSetup for "running-upgrade-568000" (driver="qemu2")
	I0920 10:50:58.606915    8893 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 10:50:58.606984    8893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 10:50:58.606993    8893 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0920 10:50:58.646169    8893 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 10:50:58.647477    8893 info.go:137] Remote host: Buildroot 2021.02.12
	I0920 10:50:58.647488    8893 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19678-6679/.minikube/addons for local assets ...
	I0920 10:50:58.647589    8893 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19678-6679/.minikube/files for local assets ...
	I0920 10:50:58.647714    8893 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19678-6679/.minikube/files/etc/ssl/certs/71912.pem -> 71912.pem in /etc/ssl/certs
	I0920 10:50:58.647850    8893 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 10:50:58.650334    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/files/etc/ssl/certs/71912.pem --> /etc/ssl/certs/71912.pem (1708 bytes)
	I0920 10:50:58.657232    8893 start.go:296] duration metric: took 50.318541ms for postStartSetup
	I0920 10:50:58.657250    8893 fix.go:56] duration metric: took 732.857917ms for fixHost
	I0920 10:50:58.657287    8893 main.go:141] libmachine: Using SSH client type: native
	I0920 10:50:58.657418    8893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008d5c00] 0x1008d8440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:50:58.657425    8893 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 10:50:58.726762    8893 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726854658.268641889
	
	I0920 10:50:58.726769    8893 fix.go:216] guest clock: 1726854658.268641889
	I0920 10:50:58.726775    8893 fix.go:229] Guest: 2024-09-20 10:50:58.268641889 -0700 PDT Remote: 2024-09-20 10:50:58.657252 -0700 PDT m=+0.840407293 (delta=-388.610111ms)
	I0920 10:50:58.726786    8893 fix.go:200] guest clock delta is within tolerance: -388.610111ms
	I0920 10:50:58.726789    8893 start.go:83] releasing machines lock for "running-upgrade-568000", held for 802.4055ms
	I0920 10:50:58.726851    8893 ssh_runner.go:195] Run: cat /version.json
	I0920 10:50:58.726861    8893 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0920 10:50:58.726851    8893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 10:50:58.726896    8893 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	W0920 10:50:58.727514    8893 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51261: connect: connection refused
	I0920 10:50:58.727538    8893 retry.go:31] will retry after 341.450773ms: dial tcp [::1]:51261: connect: connection refused
	W0920 10:50:58.764256    8893 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0920 10:50:58.764324    8893 ssh_runner.go:195] Run: systemctl --version
	I0920 10:50:58.766064    8893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 10:50:58.767715    8893 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 10:50:58.767745    8893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0920 10:50:58.770320    8893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0920 10:50:58.774537    8893 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
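Note: the two find/sed runs above rewrite any bridge and podman CNI configs so their "subnet" (and bridge "dst") entries use the pod CIDR 10.244.0.0/16 and drop IPv6 entries; here only /etc/cni/net.d/87-podman-bridge.conflist matched. The subnet rule applied to that one file, run standalone (the pre-rewrite file content shown in the comment is hypothetical):

    # e.g. "subnet": "10.88.0.0/16"  ->  "subnet": "10.244.0.0/16"
    sudo sed -i -r 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' /etc/cni/net.d/87-podman-bridge.conflist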
	I0920 10:50:58.774545    8893 start.go:495] detecting cgroup driver to use...
	I0920 10:50:58.774654    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:50:58.779874    8893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0920 10:50:58.782719    8893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 10:50:58.786105    8893 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 10:50:58.786128    8893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 10:50:58.789485    8893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:50:58.792988    8893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 10:50:58.796247    8893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:50:58.799115    8893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 10:50:58.802192    8893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 10:50:58.805792    8893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 10:50:58.808636    8893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 10:50:58.811555    8893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 10:50:58.818330    8893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 10:50:58.821131    8893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:50:58.910158    8893 ssh_runner.go:195] Run: sudo systemctl restart containerd
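Note: the sed sequence above pins containerd to the cgroupfs driver (SystemdCgroup = false) and normalizes the runc runtime entries before the restart; the same cgroup-driver detection then repeats for Docker below. Two quick checks of the resulting state (the first command also appears verbatim later in this log):

    docker info --format '{{.CgroupDriver}}'          # expected: cgroupfs
    grep SystemdCgroup /etc/containerd/config.toml    # expected: SystemdCgroup = false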
	I0920 10:50:58.917106    8893 start.go:495] detecting cgroup driver to use...
	I0920 10:50:58.917180    8893 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 10:50:58.925147    8893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:50:58.929838    8893 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 10:50:58.936746    8893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:50:58.941279    8893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:50:58.946110    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:50:58.951684    8893 ssh_runner.go:195] Run: which cri-dockerd
	I0920 10:50:58.953018    8893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 10:50:58.955670    8893 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0920 10:50:58.960431    8893 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 10:50:59.050672    8893 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 10:50:59.139581    8893 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 10:50:59.139635    8893 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 10:50:59.144719    8893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:50:59.241247    8893 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:51:15.736085    8893 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.494908791s)
	I0920 10:51:15.736167    8893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 10:51:15.741024    8893 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0920 10:51:15.749746    8893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:51:15.755268    8893 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 10:51:15.827591    8893 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 10:51:15.907899    8893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:51:15.985413    8893 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 10:51:15.991733    8893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:51:15.996114    8893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:51:16.064366    8893 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 10:51:16.103157    8893 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 10:51:16.103262    8893 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 10:51:16.105450    8893 start.go:563] Will wait 60s for crictl version
	I0920 10:51:16.105517    8893 ssh_runner.go:195] Run: which crictl
	I0920 10:51:16.106792    8893 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 10:51:16.120060    8893 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
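Note: after cri-docker.service is restarted, start.go waits up to 60s for the /var/run/cri-dockerd.sock socket and then for a working crictl, which reports RuntimeName docker / RuntimeVersion 20.10.16 above. An equivalent manual probe from inside the guest, assuming crictl is on the PATH:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a   # list all CRI containers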
	I0920 10:51:16.120142    8893 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:51:16.133070    8893 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:51:16.153408    8893 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0920 10:51:16.153547    8893 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0920 10:51:16.155010    8893 kubeadm.go:883] updating cluster {Name:running-upgrade-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0920 10:51:16.155055    8893 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:51:16.155105    8893 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:51:16.165353    8893 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:51:16.165361    8893 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:51:16.165410    8893 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:51:16.168295    8893 ssh_runner.go:195] Run: which lz4
	I0920 10:51:16.169613    8893 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 10:51:16.170776    8893 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 10:51:16.170790    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0920 10:51:17.156706    8893 docker.go:649] duration metric: took 987.146583ms to copy over tarball
	I0920 10:51:17.156768    8893 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 10:51:18.471254    8893 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.314480125s)
	I0920 10:51:18.471267    8893 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 10:51:18.488815    8893 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:51:18.492084    8893 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0920 10:51:18.497058    8893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:51:18.565309    8893 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:51:19.782749    8893 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.217431875s)
	I0920 10:51:19.783059    8893 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:51:19.810901    8893 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:51:19.810910    8893 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
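Note: the preload tarball ships these images under their legacy k8s.gcr.io names while this minikube expects registry.k8s.io, so the check concludes they "weren't preloaded" and falls back to loading each image from the per-image host cache below. A hypothetical manual shortcut (illustrative only; the harness does not do this) would be to retag the preloaded kube-* images inside the guest:

    for img in kube-apiserver kube-proxy kube-controller-manager kube-scheduler; do
      docker tag "k8s.gcr.io/${img}:v1.24.1" "registry.k8s.io/${img}:v1.24.1"
    done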
	I0920 10:51:19.810915    8893 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 10:51:19.814955    8893 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:51:19.816773    8893 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:51:19.818764    8893 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:51:19.818807    8893 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:51:19.820906    8893 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:51:19.821271    8893 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:51:19.822784    8893 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:51:19.823014    8893 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:51:19.824269    8893 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:51:19.824269    8893 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:51:19.825409    8893 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:51:19.825708    8893 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 10:51:19.826285    8893 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:51:19.826531    8893 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:51:19.827960    8893 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 10:51:19.828018    8893 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:51:20.226782    8893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:51:20.240707    8893 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0920 10:51:20.240737    8893 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:51:20.240804    8893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:51:20.252312    8893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0920 10:51:20.254293    8893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:51:20.264996    8893 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0920 10:51:20.265019    8893 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:51:20.265094    8893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:51:20.272703    8893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:51:20.277699    8893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:51:20.283769    8893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0920 10:51:20.287926    8893 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0920 10:51:20.287944    8893 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:51:20.287992    8893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:51:20.291927    8893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0920 10:51:20.295775    8893 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0920 10:51:20.295798    8893 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:51:20.295847    8893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:51:20.300766    8893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0920 10:51:20.308648    8893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0920 10:51:20.310189    8893 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0920 10:51:20.310217    8893 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0920 10:51:20.310258    8893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0920 10:51:20.316493    8893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0920 10:51:20.326125    8893 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0920 10:51:20.326146    8893 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:51:20.326130    8893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0920 10:51:20.326203    8893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0920 10:51:20.326276    8893 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0920 10:51:20.336471    8893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0920 10:51:20.336510    8893 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0920 10:51:20.336524    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0920 10:51:20.336591    8893 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:51:20.339870    8893 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0920 10:51:20.339885    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0920 10:51:20.347025    8893 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0920 10:51:20.347182    8893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:51:20.349978    8893 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0920 10:51:20.349987    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0920 10:51:20.400724    8893 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0920 10:51:20.400749    8893 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:51:20.400821    8893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:51:20.445204    8893 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0920 10:51:20.463413    8893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 10:51:20.463570    8893 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:51:20.480201    8893 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0920 10:51:20.480230    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0920 10:51:20.583311    8893 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:51:20.583324    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0920 10:51:20.604384    8893 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0920 10:51:20.604524    8893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:51:20.686650    8893 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0920 10:51:20.686671    8893 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:51:20.686677    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0920 10:51:20.686689    8893 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0920 10:51:20.686712    8893 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:51:20.686777    8893 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:51:20.828475    8893 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0920 10:51:21.605622    8893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 10:51:21.605948    8893 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:51:21.610467    8893 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0920 10:51:21.610542    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0920 10:51:21.670536    8893 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:51:21.670552    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0920 10:51:21.901250    8893 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 10:51:21.901285    8893 cache_images.go:92] duration metric: took 2.090373416s to LoadCachedImages
	W0920 10:51:21.901322    8893 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
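Note: pause, coredns, etcd and storage-provisioner were transferred and loaded, but the kube-proxy archive (and, by the same path pattern, the other kube-* archives) is missing from the host-side cache, so those images never reach the guest. Listing the cache directory named in the warning confirms what is actually present:

    ls -l /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/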
	I0920 10:51:21.901333    8893 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0920 10:51:21.901383    8893 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-568000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 10:51:21.901457    8893 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 10:51:21.915643    8893 cni.go:84] Creating CNI manager for ""
	I0920 10:51:21.915660    8893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:51:21.915665    8893 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 10:51:21.915674    8893 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-568000 NodeName:running-upgrade-568000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 10:51:21.915743    8893 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-568000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
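	The generated kubeadm config above is four YAML documents in one file, separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch of reading such a multi-document stream with gopkg.in/yaml.v3, whose decoder yields one document per Decode call and io.EOF when the stream ends (the path is the one the log writes to):

```go
// splitconfig.go: a sketch of iterating the multi-document kubeadm YAML.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// Expect InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, KubeProxyConfiguration in order.
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
```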
	I0920 10:51:21.915809    8893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0920 10:51:21.918896    8893 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 10:51:21.918931    8893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 10:51:21.921963    8893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0920 10:51:21.927094    8893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 10:51:21.932119    8893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0920 10:51:21.937531    8893 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0920 10:51:21.938935    8893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:51:22.021093    8893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:51:22.026851    8893 certs.go:68] Setting up /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000 for IP: 10.0.2.15
	I0920 10:51:22.026875    8893 certs.go:194] generating shared ca certs ...
	I0920 10:51:22.026889    8893 certs.go:226] acquiring lock for ca certs: {Name:mkeda31d83c21edf6ebc3767ef11bc03f6f18a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:51:22.027117    8893 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.key
	I0920 10:51:22.027174    8893 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/proxy-client-ca.key
	I0920 10:51:22.027179    8893 certs.go:256] generating profile certs ...
	I0920 10:51:22.027252    8893 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/client.key
	I0920 10:51:22.027265    8893 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/apiserver.key.1b4f6092
	I0920 10:51:22.027276    8893 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/apiserver.crt.1b4f6092 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0920 10:51:22.101055    8893 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/apiserver.crt.1b4f6092 ...
	I0920 10:51:22.101060    8893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/apiserver.crt.1b4f6092: {Name:mk0a01ffa4b0f41830ab7fc0a2abf89a69e27f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:51:22.101294    8893 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/apiserver.key.1b4f6092 ...
	I0920 10:51:22.101298    8893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/apiserver.key.1b4f6092: {Name:mk6df649ab227ace7a1f2cff41bb1bc597b4cdfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:51:22.101419    8893 certs.go:381] copying /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/apiserver.crt.1b4f6092 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/apiserver.crt
	I0920 10:51:22.101620    8893 certs.go:385] copying /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/apiserver.key.1b4f6092 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/apiserver.key
	I0920 10:51:22.101791    8893 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/proxy-client.key
	I0920 10:51:22.101924    8893 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/7191.pem (1338 bytes)
	W0920 10:51:22.101954    8893 certs.go:480] ignoring /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/7191_empty.pem, impossibly tiny 0 bytes
	I0920 10:51:22.101960    8893 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 10:51:22.101986    8893 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem (1078 bytes)
	I0920 10:51:22.102016    8893 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem (1123 bytes)
	I0920 10:51:22.102043    8893 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/key.pem (1675 bytes)
	I0920 10:51:22.102097    8893 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/files/etc/ssl/certs/71912.pem (1708 bytes)
	I0920 10:51:22.102490    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 10:51:22.110015    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 10:51:22.117525    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 10:51:22.125112    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 10:51:22.132109    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 10:51:22.138750    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 10:51:22.145676    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 10:51:22.152957    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 10:51:22.160188    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/files/etc/ssl/certs/71912.pem --> /usr/share/ca-certificates/71912.pem (1708 bytes)
	I0920 10:51:22.167957    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 10:51:22.174498    8893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/7191.pem --> /usr/share/ca-certificates/7191.pem (1338 bytes)
	I0920 10:51:22.181749    8893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 10:51:22.187024    8893 ssh_runner.go:195] Run: openssl version
	I0920 10:51:22.188848    8893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 10:51:22.191790    8893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:51:22.193191    8893 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:50 /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:51:22.193219    8893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:51:22.195001    8893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 10:51:22.198029    8893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7191.pem && ln -fs /usr/share/ca-certificates/7191.pem /etc/ssl/certs/7191.pem"
	I0920 10:51:22.201409    8893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7191.pem
	I0920 10:51:22.202832    8893 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:39 /usr/share/ca-certificates/7191.pem
	I0920 10:51:22.202859    8893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7191.pem
	I0920 10:51:22.204691    8893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7191.pem /etc/ssl/certs/51391683.0"
	I0920 10:51:22.208370    8893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71912.pem && ln -fs /usr/share/ca-certificates/71912.pem /etc/ssl/certs/71912.pem"
	I0920 10:51:22.211366    8893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71912.pem
	I0920 10:51:22.212762    8893 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:39 /usr/share/ca-certificates/71912.pem
	I0920 10:51:22.212785    8893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71912.pem
	I0920 10:51:22.214714    8893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71912.pem /etc/ssl/certs/3ec20f2e.0"
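	The repeated `openssl x509 -hash -noout` / `ln -fs` pairs above populate OpenSSL's hashed certificate directory: verifiers locate a CA in /etc/ssl/certs by its subject-name hash, so each PEM gets a `<hash>.0` symlink (b5213941.0, 51391683.0, and 3ec20f2e.0 here). A sketch of one round of that pattern, using the same openssl invocation the log runs:

```go
// certhash.go: hash-and-symlink a CA certificate into a directory so
// OpenSSL can find it by subject-name hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pem, certDir string) error {
	// Same command as the log: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	// Equivalent of `ln -fs <pem> <certDir>/<hash>.0`.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

	The `-checkend 86400` runs that follow are a different check: they ask whether each control-plane certificate expires within the next 24 hours.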
	I0920 10:51:22.217874    8893 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 10:51:22.219617    8893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 10:51:22.221549    8893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 10:51:22.223465    8893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 10:51:22.225318    8893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 10:51:22.227341    8893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 10:51:22.229121    8893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 10:51:22.231073    8893 kubeadm.go:392] StartCluster: {Name:running-upgrade-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:51:22.231143    8893 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:51:22.241813    8893 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 10:51:22.245547    8893 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 10:51:22.245557    8893 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 10:51:22.245587    8893 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 10:51:22.248788    8893 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:51:22.248825    8893 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-568000" does not appear in /Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:51:22.248839    8893 kubeconfig.go:62] /Users/jenkins/minikube-integration/19678-6679/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-568000" cluster setting kubeconfig missing "running-upgrade-568000" context setting]
	I0920 10:51:22.249009    8893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/kubeconfig: {Name:mkec5cafcbf4b1660482b6f210de54829a52092a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:51:22.249922    8893 kapi.go:59] client config for running-upgrade-568000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/client.key", CAFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101eae030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:51:22.250830    8893 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 10:51:22.254010    8893 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-568000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
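	The drift check above hinges on diff's exit status: 0 means the deployed kubeadm.yaml matches the new one, 1 means they differ (here the criSocket scheme and the kubelet cgroupDriver changed between versions), and 2 means diff itself failed. A sketch of the same check:

```go
// drift.go: a sketch of the config-drift check: run `diff -u old new`
// and treat exit status 1 (files differ) as "reconfigure needed".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func configDrifted(oldPath, newPath string) (bool, string, error) {
	cmd := exec.Command("diff", "-u", oldPath, newPath)
	out, err := cmd.Output()
	if err == nil {
		return false, "", nil // exit 0: identical
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // exit 1: out holds the unified diff
	}
	return false, "", err // exit 2: diff itself failed (missing file, etc.)
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("kubeadm config drift detected:\n" + diff)
	}
}
```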
	I0920 10:51:22.254016    8893 kubeadm.go:1160] stopping kube-system containers ...
	I0920 10:51:22.254066    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:51:22.265577    8893 docker.go:483] Stopping containers: [9be74b413586 31802631b823 7580bd5f450d 30d9b98333f5 4b8eb70b9849 b4c22c2c0bed dc2862ede330 ea5e144b58ef 26792109aa9e 9295fc1e3f2a 433240456b35 806b3c49d615 702c2c50a0f6 2d6a956de515 ba1756fa8f69]
	I0920 10:51:22.265660    8893 ssh_runner.go:195] Run: docker stop 9be74b413586 31802631b823 7580bd5f450d 30d9b98333f5 4b8eb70b9849 b4c22c2c0bed dc2862ede330 ea5e144b58ef 26792109aa9e 9295fc1e3f2a 433240456b35 806b3c49d615 702c2c50a0f6 2d6a956de515 ba1756fa8f69
	I0920 10:51:22.276809    8893 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 10:51:22.359379    8893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:51:22.363099    8893 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 20 17:50 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 20 17:50 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 20 17:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 20 17:50 /etc/kubernetes/scheduler.conf
	
	I0920 10:51:22.363129    8893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/admin.conf
	I0920 10:51:22.366443    8893 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:51:22.366479    8893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:51:22.369894    8893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/kubelet.conf
	I0920 10:51:22.373214    8893 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:51:22.373244    8893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:51:22.376256    8893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/controller-manager.conf
	I0920 10:51:22.378761    8893 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:51:22.378784    8893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:51:22.381660    8893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/scheduler.conf
	I0920 10:51:22.384457    8893 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:51:22.384491    8893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 10:51:22.387035    8893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:51:22.390063    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:51:22.410986    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:51:22.965698    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:51:23.262353    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:51:23.290762    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:51:23.314307    8893 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:51:23.314392    8893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:51:23.816778    8893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:51:24.316457    8893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:51:24.321401    8893 api_server.go:72] duration metric: took 1.007101s to wait for apiserver process to appear ...
	I0920 10:51:24.321409    8893 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:51:24.321418    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:29.323524    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:29.323594    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:34.324110    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:34.324214    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:39.325312    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:39.325363    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:44.326356    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:44.326448    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:49.328179    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:49.328284    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:54.330915    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:54.331015    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:59.332919    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:59.333025    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:04.335833    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:04.335935    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:09.338662    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:09.338755    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:14.341543    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:14.341641    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:19.344461    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:19.344560    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:24.345964    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
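	From 10:51:24 onward the restart loop probes https://10.0.2.15:8443/healthz and every probe dies with a client timeout after roughly five seconds, which is what ultimately sinks this run. A sketch of such a poll (TLS verification is skipped here for brevity; minikube actually verifies against its generated CA):

```go
// healthz.go: poll the apiserver healthz endpoint until it answers
// "ok" or the attempts run out, mirroring the loop in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5 s gaps between probes above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: skip CA check for the sketch
		},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 20); err != nil {
		fmt.Println(err)
	}
}
```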
	I0920 10:52:24.346229    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:24.377234    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:52:24.377347    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:24.390643    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:52:24.390731    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:24.401813    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:52:24.401902    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:24.412311    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:52:24.412401    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:24.422793    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:52:24.422873    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:24.432922    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:52:24.432998    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:24.443174    8893 logs.go:276] 0 containers: []
	W0920 10:52:24.443187    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:24.443260    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:24.454102    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:52:24.454134    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:52:24.454139    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:52:24.465327    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:24.465339    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:24.489780    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:52:24.489786    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:52:24.503506    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:52:24.503518    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:52:24.516812    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:52:24.516823    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:52:24.556112    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:52:24.556121    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:52:24.574657    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:52:24.574666    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:52:24.593259    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:52:24.593268    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:52:24.614352    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:52:24.614362    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:52:24.629595    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:24.629607    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:24.664736    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:52:24.664743    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:52:24.676638    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:24.676648    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:24.748485    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:52:24.748497    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:52:24.762918    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:52:24.762929    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:52:24.774902    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:52:24.774915    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:52:24.786297    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:52:24.786307    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:24.799372    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:24.799381    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
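	Each diagnostic pass above follows the same shape: enumerate a component's containers with `docker ps -a --filter name=k8s_<component> --format {{.ID}}`, then dump the last 400 lines of each with `docker logs --tail 400`. A compact sketch of that gather loop, reusing the commands from the log:

```go
// gatherlogs.go: list container IDs for a component, then collect the
// tail of each container's logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func componentLogs(name string) (map[string]string, error) {
	// Same filter the log uses, e.g. name=k8s_kube-apiserver.
	ps := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}")
	out, err := ps.Output()
	if err != nil {
		return nil, err
	}
	logs := make(map[string]string)
	for _, id := range strings.Fields(string(out)) {
		// docker logs writes to both stdout and stderr, so capture both.
		l, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return nil, err
		}
		logs[id] = string(l)
	}
	return logs, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		m, err := componentLogs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", c, len(m))
	}
}
```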
	I0920 10:52:27.304372    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:32.307163    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:32.307470    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:32.333380    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:52:32.333528    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:32.352572    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:52:32.352673    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:32.367855    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:52:32.367947    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:32.378833    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:52:32.378918    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:32.389241    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:52:32.389311    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:32.400113    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:52:32.400220    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:32.410380    8893 logs.go:276] 0 containers: []
	W0920 10:52:32.410390    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:32.410454    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:32.421029    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:52:32.421045    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:52:32.421049    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:52:32.432512    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:32.432522    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:32.469276    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:32.469286    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:32.503680    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:52:32.503689    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:52:32.540583    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:52:32.540595    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:52:32.558375    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:52:32.558387    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:52:32.573450    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:52:32.573461    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:52:32.591689    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:32.591702    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:32.596229    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:52:32.596238    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:52:32.607841    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:52:32.607854    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:52:32.620833    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:52:32.620847    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:32.632495    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:52:32.632507    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:52:32.646380    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:52:32.646390    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:52:32.660207    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:52:32.660219    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:52:32.671692    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:52:32.671705    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:52:32.683305    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:52:32.683317    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:52:32.694849    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:32.694860    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:35.221405    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:40.224215    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:40.224781    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:40.262770    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:52:40.262934    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:40.286203    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:52:40.286358    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:40.302338    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:52:40.302435    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:40.314853    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:52:40.314929    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:40.330016    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:52:40.330101    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:40.342149    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:52:40.342228    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:40.352348    8893 logs.go:276] 0 containers: []
	W0920 10:52:40.352358    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:40.352430    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:40.373009    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:52:40.373026    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:40.373031    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:40.410549    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:40.410559    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:40.415432    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:52:40.415437    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:52:40.452581    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:52:40.452592    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:52:40.467336    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:52:40.467350    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:52:40.486143    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:52:40.486156    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:52:40.503372    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:52:40.503381    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:52:40.515277    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:40.515292    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:40.541665    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:52:40.541674    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:52:40.554924    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:52:40.554935    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:52:40.566847    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:52:40.566860    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:52:40.587483    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:52:40.587495    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:52:40.601678    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:52:40.601689    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:52:40.612908    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:40.612919    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:40.651427    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:52:40.651438    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:52:40.665616    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:52:40.665629    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:52:40.676876    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:52:40.676888    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:43.190506    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:48.193281    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:48.193786    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:48.228147    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:52:48.228308    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:48.249418    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:52:48.249551    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:48.264200    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:52:48.264291    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:48.276126    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:52:48.276207    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:48.286397    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:52:48.286469    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:48.297356    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:52:48.297454    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:48.308133    8893 logs.go:276] 0 containers: []
	W0920 10:52:48.308144    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:48.308207    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:48.318743    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:52:48.318761    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:52:48.318766    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:52:48.336014    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:52:48.336025    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:52:48.347354    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:52:48.347366    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:52:48.363869    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:48.363880    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:48.390199    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:48.390206    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:48.428364    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:52:48.428384    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:52:48.442604    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:52:48.442614    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:52:48.480158    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:52:48.480173    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:52:48.497020    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:52:48.497033    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:52:48.510906    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:48.510917    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:48.545538    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:52:48.545552    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:52:48.559604    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:52:48.559614    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:52:48.577646    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:52:48.577657    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:48.590342    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:48.590352    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:48.595057    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:52:48.595064    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:52:48.607168    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:52:48.607178    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:52:48.618150    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:52:48.618159    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:52:51.131288    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:56.134101    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:56.134611    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:56.170644    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:52:56.170792    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:56.191012    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:52:56.191134    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:56.209203    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:52:56.209281    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:56.220583    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:52:56.220666    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:56.230791    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:52:56.230859    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:56.245137    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:52:56.245222    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:56.255468    8893 logs.go:276] 0 containers: []
	W0920 10:52:56.255480    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:56.255554    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:56.265943    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:52:56.265961    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:56.265967    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:56.303573    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:52:56.303584    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:56.315906    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:52:56.315914    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:52:56.333032    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:52:56.333045    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:52:56.346174    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:52:56.346188    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:52:56.358058    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:56.358073    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:56.362788    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:52:56.362796    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:52:56.377219    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:52:56.377231    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:52:56.388959    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:52:56.388973    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:52:56.400072    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:56.400082    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:56.435034    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:52:56.435043    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:52:56.450042    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:52:56.450054    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:52:56.464889    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:52:56.464899    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:52:56.487095    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:56.487105    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:56.511850    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:52:56.511859    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:52:56.548791    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:52:56.548804    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:52:56.562347    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:52:56.562360    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:52:59.082435    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:04.085210    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
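
The five-second gap between each "Checking apiserver healthz" line and its matching "stopped: ... Client.Timeout exceeded" line reflects a hard client timeout on the health probe: the GET against https://10.0.2.15:8443/healthz is abandoned after 5s, logged, and the runner falls back to gathering diagnostics. A minimal Go sketch of that probe pattern, assuming the URL and timeout seen in the log (the function name and the skip-verify TLS transport are illustrative assumptions, not minikube's actual api_server.go code):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // healthzOnce probes the apiserver health endpoint once, giving up
    // after five seconds -- the same gap visible in the log timestamps.
    func healthzOnce(url string) error {
        client := &http.Client{
            // A timed-out request surfaces exactly as in the log:
            // "Client.Timeout exceeded while awaiting headers".
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption: the VM's apiserver cert is not trusted by the
                // host, so a standalone probe would skip verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        if err := healthzOnce("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
        }
    }
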
	I0920 10:53:04.085306    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:04.096933    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:53:04.096995    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:04.108501    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:53:04.108584    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:04.119796    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:53:04.119878    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:04.131010    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:53:04.131086    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:04.141313    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:53:04.141393    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:04.151979    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:53:04.152056    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:04.162104    8893 logs.go:276] 0 containers: []
	W0920 10:53:04.162115    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:04.162182    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:04.172489    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
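
Before each gathering pass, the runner enumerates container IDs per control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`; an empty result (as with "kindnet" above) is logged as a warning and that component is skipped. A sketch of that discovery step, assuming only that docker is on the PATH (the wrapper function is illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the discovery lines in the log: one
    // `docker ps -a`, filtered by the kubelet's k8s_ name prefix,
    // printing only container IDs.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println("discovery failed:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
            }
        }
    }

Two IDs per component here means both an old and a restarted instance exist, so the gatherer pulls logs from each.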
	I0920 10:53:04.172506    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:53:04.172511    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:53:04.189942    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:04.189953    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:04.194270    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:04.194280    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:04.231521    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:53:04.231535    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:53:04.249319    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:53:04.249329    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:53:04.263899    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:53:04.263910    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:53:04.280869    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:53:04.280879    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:53:04.292334    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:53:04.292343    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:53:04.303580    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:53:04.303591    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:53:04.314968    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:53:04.314980    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:53:04.326249    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:53:04.326264    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:53:04.337277    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:04.337289    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:04.361346    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:04.361358    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:04.399364    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:53:04.399373    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:53:04.413465    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:53:04.413475    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:53:04.450112    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:53:04.450122    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:53:04.461793    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:53:04.461805    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
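
The gathering commands themselves run through a single `bash -c` invocation per target: component logs come from `docker logs --tail 400 <id>`, while the "container status" step uses the backtick fallback `` `which crictl || echo crictl` `` so that crictl is used when installed and a failing bare `crictl` otherwise triggers the trailing `|| sudo docker ps -a`. A sketch of both invocations, with the command strings taken verbatim from the log and the surrounding wrapper (function names, hard-coded container ID) assumed for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs mirrors the "Gathering logs for <component> [<id>]" lines:
    // the container's last 400 log lines, fetched via one shell call.
    func gatherLogs(id string) (string, error) {
        cmd := fmt.Sprintf("docker logs --tail 400 %s", id)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    // containerStatus mirrors the "container status" step: prefer crictl
    // when present, otherwise fall back to plain docker ps.
    func containerStatus() (string, error) {
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Example ID copied from the log's storage-provisioner entries.
        if out, err := gatherLogs("9335c378f8ce"); err == nil {
            fmt.Print(out)
        }
        if out, err := containerStatus(); err == nil {
            fmt.Print(out)
        }
    }
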
	I0920 10:53:06.973878    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:11.976770    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:11.977296    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:12.017843    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:53:12.018000    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:12.042256    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:53:12.042357    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:12.056346    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:53:12.056420    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:12.067670    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:53:12.067743    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:12.078433    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:53:12.078514    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:12.095079    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:53:12.095149    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:12.105487    8893 logs.go:276] 0 containers: []
	W0920 10:53:12.105499    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:12.105567    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:12.115885    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:53:12.115906    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:53:12.115912    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:53:12.129760    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:53:12.129771    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:53:12.147304    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:53:12.147313    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:53:12.158537    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:53:12.158548    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:53:12.175411    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:12.175422    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:12.201418    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:53:12.201424    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:12.213033    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:53:12.213047    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:53:12.226656    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:53:12.226665    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:53:12.244395    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:53:12.244406    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:53:12.259363    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:12.259374    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:12.263801    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:12.263812    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:12.299256    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:53:12.299270    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:53:12.339355    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:53:12.339365    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:53:12.351448    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:53:12.351457    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:53:12.369362    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:53:12.369373    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:53:12.381442    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:12.381450    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:12.418970    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:53:12.418989    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:53:14.932626    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:19.935389    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:19.935914    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:19.977797    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:53:19.977955    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:19.999789    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:53:19.999909    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:20.015933    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:53:20.016011    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:20.030565    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:53:20.030655    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:20.041592    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:53:20.041676    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:20.051744    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:53:20.051822    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:20.061908    8893 logs.go:276] 0 containers: []
	W0920 10:53:20.061919    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:20.061987    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:20.073355    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:53:20.073375    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:53:20.073380    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:53:20.085039    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:53:20.085052    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:53:20.096290    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:20.096304    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:20.122626    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:53:20.122635    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:53:20.165505    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:53:20.165516    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:53:20.183100    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:20.183115    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:20.218142    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:53:20.218152    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:53:20.233835    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:53:20.233849    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:53:20.251487    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:53:20.251497    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:53:20.262936    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:53:20.262945    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:53:20.278123    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:53:20.278136    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:53:20.290174    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:53:20.290186    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:20.302147    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:20.302162    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:20.338600    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:20.338608    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:20.343324    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:53:20.343332    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:53:20.357912    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:53:20.357924    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:53:20.370747    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:53:20.370758    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:53:22.883373    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:27.885607    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:27.886051    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:27.933636    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:53:27.933754    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:27.957521    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:53:27.957611    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:27.975918    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:53:27.975996    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:27.987770    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:53:27.987849    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:28.001371    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:53:28.001459    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:28.012416    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:53:28.012490    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:28.023939    8893 logs.go:276] 0 containers: []
	W0920 10:53:28.023952    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:28.024012    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:28.034530    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:53:28.034553    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:53:28.034559    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:53:28.049736    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:53:28.049746    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:53:28.061691    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:53:28.061705    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:53:28.075658    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:53:28.075668    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:53:28.114241    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:53:28.114251    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:53:28.131859    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:53:28.131869    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:53:28.149158    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:28.149167    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:28.153340    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:28.153348    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:28.187706    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:53:28.187719    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:53:28.198986    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:53:28.198995    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:28.210934    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:28.210943    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:28.246493    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:53:28.246505    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:53:28.257765    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:53:28.257777    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:53:28.269301    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:53:28.269311    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:53:28.280804    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:53:28.280814    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:53:28.298686    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:53:28.298696    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:53:28.310015    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:28.310026    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:30.837687    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:35.839924    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:35.840077    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:35.852593    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:53:35.852683    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:35.863975    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:53:35.864066    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:35.874697    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:53:35.874778    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:35.887210    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:53:35.887301    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:35.898252    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:53:35.898337    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:35.909528    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:53:35.909612    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:35.921792    8893 logs.go:276] 0 containers: []
	W0920 10:53:35.921808    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:35.921884    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:35.933542    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:53:35.933562    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:53:35.933569    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:53:35.949412    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:53:35.949432    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:53:35.962933    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:53:35.962948    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:53:35.976016    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:35.976027    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:36.014075    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:36.014096    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:36.019280    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:36.019293    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:36.062888    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:53:36.062907    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:53:36.079582    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:53:36.079596    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:53:36.099784    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:53:36.099800    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:53:36.114118    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:36.114133    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:36.143250    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:53:36.143280    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:36.157048    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:53:36.157065    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:53:36.199676    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:53:36.199704    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:53:36.217232    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:53:36.217252    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:53:36.235247    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:53:36.235259    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:53:36.256057    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:53:36.256082    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:53:36.276032    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:53:36.276049    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:53:38.790624    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:43.792889    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:43.793124    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:43.804574    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:53:43.804668    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:43.815860    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:53:43.815935    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:43.830406    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:53:43.830481    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:43.841780    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:53:43.841859    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:43.854363    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:53:43.854442    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:43.864831    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:53:43.864906    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:43.875251    8893 logs.go:276] 0 containers: []
	W0920 10:53:43.875264    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:43.875337    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:43.885976    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:53:43.885999    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:53:43.886006    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:53:43.911087    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:53:43.911099    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:53:43.927327    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:43.927336    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:43.952632    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:43.952638    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:43.989319    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:43.989329    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:44.024779    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:53:44.024790    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:53:44.039344    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:53:44.039354    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:53:44.051064    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:53:44.051075    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:53:44.088807    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:53:44.088820    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:53:44.100558    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:53:44.100571    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:53:44.118984    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:53:44.118997    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:53:44.132502    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:53:44.132513    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:53:44.143855    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:53:44.143865    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:44.156033    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:53:44.156044    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:53:44.167610    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:44.167620    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:44.172536    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:53:44.172544    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:53:44.184223    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:53:44.184236    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:53:46.699154    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:51.701322    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:51.701446    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:51.716518    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:53:51.716600    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:51.728453    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:53:51.728536    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:51.740709    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:53:51.740796    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:51.752522    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:53:51.752613    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:51.773945    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:53:51.774029    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:51.787282    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:53:51.787372    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:51.798962    8893 logs.go:276] 0 containers: []
	W0920 10:53:51.798974    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:51.799050    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:51.811172    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:53:51.811192    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:53:51.811198    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:53:51.824234    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:53:51.824247    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:53:51.837008    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:53:51.837020    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:53:51.851650    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:53:51.851663    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:53:51.869979    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:53:51.869989    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:53:51.887825    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:53:51.887841    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:53:51.899463    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:53:51.899474    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:53:51.911085    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:51.911096    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:51.935840    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:51.935849    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:51.940164    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:51.940174    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:51.977733    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:53:51.977748    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:53:51.995457    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:53:51.995468    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:53:52.010752    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:52.010762    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:52.049261    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:53:52.049272    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:53:52.089395    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:53:52.089411    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:53:52.101412    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:53:52.101426    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:53:52.117565    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:53:52.117576    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:54.631763    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:59.634071    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:59.634198    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:59.646327    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:53:59.646417    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:59.659052    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:53:59.659140    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:59.671059    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:53:59.671147    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:59.682377    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:53:59.682461    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:59.693267    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:53:59.693350    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:59.704287    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:53:59.704363    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:59.714661    8893 logs.go:276] 0 containers: []
	W0920 10:53:59.714675    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:59.714745    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:59.725601    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:53:59.725621    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:53:59.725626    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:53:59.743372    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:53:59.743385    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:53:59.755086    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:53:59.755098    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:53:59.766780    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:53:59.766789    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:53:59.778912    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:53:59.778922    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:53:59.796755    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:53:59.796764    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:53:59.808383    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:53:59.808396    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:59.820833    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:59.820842    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:59.858995    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:59.859010    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:59.863991    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:59.863997    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:59.901667    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:53:59.901680    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:53:59.915207    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:59.915215    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:59.938642    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:53:59.938648    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:53:59.958241    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:53:59.958255    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:53:59.997167    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:53:59.997181    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:00.010971    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:00.010983    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:00.026555    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:00.026567    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:02.539498    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:07.541751    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:07.542051    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:07.568841    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:54:07.568998    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:07.586166    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:54:07.586287    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:07.600371    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:54:07.600457    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:07.612509    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:54:07.612584    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:07.623392    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:54:07.623464    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:07.634443    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:54:07.634531    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:07.644437    8893 logs.go:276] 0 containers: []
	W0920 10:54:07.644450    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:07.644516    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:07.655084    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:54:07.655100    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:54:07.655105    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:54:07.668873    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:54:07.668886    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:54:07.685863    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:54:07.685873    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:54:07.697321    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:54:07.697331    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:54:07.708778    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:07.708788    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:07.723800    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:54:07.723810    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:54:07.741910    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:07.741923    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:07.764761    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:54:07.764767    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:07.776513    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:07.776522    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:07.781038    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:07.781045    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:07.815944    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:54:07.815958    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:54:07.853518    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:54:07.853529    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:54:07.865611    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:54:07.865623    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:54:07.876787    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:07.876797    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:07.913034    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:54:07.913041    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:07.934167    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:54:07.934177    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:54:07.945556    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:07.945565    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:10.457880    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:15.459675    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:15.460292    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:15.496604    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:54:15.496772    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:15.517086    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:54:15.517206    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:15.531978    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:54:15.532075    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:15.544194    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:54:15.544281    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:15.555170    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:54:15.555258    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:15.565732    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:54:15.565814    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:15.576149    8893 logs.go:276] 0 containers: []
	W0920 10:54:15.576164    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:15.576226    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:15.586365    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:54:15.586383    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:15.586388    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:15.601654    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:54:15.601666    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:54:15.614658    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:15.614671    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:15.653493    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:54:15.653510    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:54:15.692454    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:15.692462    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:15.716818    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:54:15.716829    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:54:15.734981    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:54:15.734989    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:54:15.746807    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:54:15.746817    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:54:15.758682    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:15.758693    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:15.763034    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:15.763044    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:15.800980    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:54:15.800992    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:54:15.816873    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:54:15.816888    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:15.834734    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:54:15.834748    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:54:15.848359    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:54:15.848370    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:54:15.860240    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:54:15.860250    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:54:15.880288    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:15.880306    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:15.893634    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:54:15.893647    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:18.408933    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:23.411223    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:23.411358    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:23.424936    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:54:23.425015    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:23.436411    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:54:23.436480    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:23.447101    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:54:23.447169    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:23.457593    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:54:23.457681    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:23.467849    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:54:23.467931    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:23.478997    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:54:23.479082    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:23.489823    8893 logs.go:276] 0 containers: []
	W0920 10:54:23.489836    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:23.489910    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:23.500343    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:54:23.500362    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:23.500368    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:23.538740    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:54:23.538748    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:54:23.552767    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:23.552777    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:23.563905    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:23.563915    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:23.599879    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:54:23.599892    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:54:23.639724    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:54:23.639737    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:54:23.651322    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:23.651333    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:23.675091    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:54:23.675098    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:23.688826    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:54:23.688841    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:54:23.701068    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:54:23.701079    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:23.714505    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:23.714515    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:23.719050    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:54:23.719059    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:54:23.736972    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:54:23.736983    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:54:23.748291    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:54:23.748301    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:54:23.766348    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:23.766358    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:23.781890    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:54:23.781900    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:54:23.799322    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:54:23.799331    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
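The block between two health checks is always the same diagnostic sweep: list the container IDs for each control-plane component by k8s_<name> filter, then tail the last 400 lines of each. A hypothetical condensed version of that sweep (the exec.Command invocations mirror the Run: lines above; error handling and output capture are trimmed for brevity):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // the same component names this log enumerates via k8s_<name> filters
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            // docker ps -a --filter=name=k8s_<c> --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            for _, id := range ids {
                // docker logs --tail 400 <id>, as in the "Gathering logs" lines
                exec.Command("docker", "logs", "--tail", "400", id).Run()
            }
        }
    }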
	I0920 10:54:26.313488    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:31.315815    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:31.316412    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:31.351587    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:54:31.351748    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:31.382878    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:54:31.382980    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:31.396341    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:54:31.396413    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:31.408008    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:54:31.408079    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:31.418613    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:54:31.418695    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:31.428770    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:54:31.428848    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:31.439519    8893 logs.go:276] 0 containers: []
	W0920 10:54:31.439537    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:31.439601    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:31.450461    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:54:31.450477    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:54:31.450482    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:31.465075    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:31.465088    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:31.480856    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:54:31.480868    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:54:31.493011    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:31.493025    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:31.497251    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:31.497261    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:31.532547    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:54:31.532559    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:54:31.546824    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:54:31.546833    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:54:31.557682    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:54:31.557694    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:54:31.569523    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:54:31.569533    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:54:31.581922    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:31.581933    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:31.606849    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:54:31.606862    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:31.619163    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:31.619176    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:31.654405    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:54:31.654411    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:54:31.675064    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:54:31.675073    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:54:31.686381    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:54:31.686389    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:54:31.703563    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:31.703574    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:31.715251    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:54:31.715260    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:54:34.254063    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:39.254941    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:39.255027    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:39.266623    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:54:39.266706    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:39.280508    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:54:39.280566    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:39.295105    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:54:39.295197    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:39.307114    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:54:39.307201    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:39.319172    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:54:39.319268    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:39.331681    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:54:39.331770    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:39.343760    8893 logs.go:276] 0 containers: []
	W0920 10:54:39.343772    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:39.343845    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:39.355884    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:54:39.355903    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:54:39.355909    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:54:39.380681    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:54:39.380692    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:54:39.397131    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:54:39.397145    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:54:39.425862    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:54:39.425875    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:54:39.438621    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:54:39.438633    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:39.452076    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:39.452090    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:39.465023    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:39.465036    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:39.469711    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:39.469721    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:39.508296    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:54:39.508310    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:54:39.528027    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:54:39.528039    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:54:39.541072    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:54:39.541084    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:54:39.554339    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:54:39.554351    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:54:39.594071    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:54:39.594087    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:54:39.606787    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:39.606799    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:39.622988    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:39.622998    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:39.664187    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:54:39.664204    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:39.680070    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:39.680084    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:42.207373    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:47.210041    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:47.210148    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:47.222234    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:54:47.222319    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:47.238167    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:54:47.238251    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:47.252657    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:54:47.252738    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:47.292204    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:54:47.292294    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:47.307354    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:54:47.307444    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:47.319361    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:54:47.319443    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:47.331593    8893 logs.go:276] 0 containers: []
	W0920 10:54:47.331607    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:47.331684    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:47.342544    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:54:47.342562    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:54:47.342569    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:54:47.354236    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:47.354248    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:47.370842    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:54:47.370854    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:54:47.383717    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:54:47.383728    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:47.396532    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:54:47.396546    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:54:47.415004    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:54:47.415016    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:54:47.427004    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:47.427015    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:47.438572    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:54:47.438583    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:54:47.452692    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:54:47.452707    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:54:47.464751    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:54:47.464764    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:54:47.476659    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:47.476670    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:47.501237    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:47.501251    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:47.538127    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:47.538138    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:47.542432    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:54:47.542438    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:54:47.580378    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:54:47.580391    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:47.594818    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:54:47.594829    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:54:47.612169    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:47.612179    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:50.152295    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:55.154308    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:55.154533    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:55.172856    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:54:55.172967    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:55.186751    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:54:55.186831    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:55.198249    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:54:55.198336    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:55.208961    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:54:55.209041    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:55.219518    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:54:55.219604    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:55.230468    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:54:55.230550    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:55.240343    8893 logs.go:276] 0 containers: []
	W0920 10:54:55.240355    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:55.240427    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:55.251235    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:54:55.251253    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:55.251260    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:55.288255    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:54:55.288267    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:54:55.302669    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:54:55.302683    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:54:55.313411    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:54:55.313422    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:54:55.331092    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:55.331101    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:55.345715    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:55.345730    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:55.360925    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:54:55.360937    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:54:55.377034    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:55.377043    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:55.401432    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:54:55.401447    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:55.413317    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:55.413330    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:55.417463    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:54:55.417470    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:54:55.454973    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:54:55.454983    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:55.472282    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:54:55.472297    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:54:55.490510    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:54:55.490520    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:54:55.502771    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:55.502787    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:55.540280    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:54:55.540287    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:54:55.552124    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:54:55.552138    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:54:58.065574    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:03.067758    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:03.067888    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:03.079384    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:55:03.079476    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:03.091303    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:55:03.091387    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:03.102637    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:55:03.102717    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:03.115691    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:55:03.115782    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:03.127394    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:55:03.127482    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:03.137855    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:55:03.137937    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:03.147598    8893 logs.go:276] 0 containers: []
	W0920 10:55:03.147612    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:03.147692    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:03.158175    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:55:03.158195    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:03.158200    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:03.197245    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:55:03.197260    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:55:03.212607    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:55:03.212619    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:55:03.224557    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:55:03.224568    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:03.236099    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:55:03.236113    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:55:03.250932    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:55:03.250943    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:55:03.268701    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:55:03.268717    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:55:03.284170    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:55:03.284188    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:55:03.297291    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:55:03.297304    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:55:03.319860    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:55:03.319873    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:55:03.331875    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:03.331886    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:03.354697    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:03.354705    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:03.358771    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:03.358777    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:03.403490    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:55:03.403505    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:55:03.415632    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:55:03.415644    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:55:03.427700    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:55:03.427711    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:55:03.464958    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:55:03.464972    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:55:05.979537    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:10.981912    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:10.982151    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:11.004264    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:55:11.004365    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:11.018668    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:55:11.018763    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:11.037106    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:55:11.037180    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:11.054587    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:55:11.054672    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:11.065019    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:55:11.065096    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:11.075570    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:55:11.075641    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:11.085996    8893 logs.go:276] 0 containers: []
	W0920 10:55:11.086007    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:11.086083    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:11.096880    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:55:11.096900    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:11.096907    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:11.101540    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:55:11.101549    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:55:11.115293    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:55:11.115303    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:55:11.152122    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:55:11.152132    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:55:11.163485    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:55:11.163497    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:55:11.176413    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:11.176425    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:11.199336    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:55:11.199343    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:55:11.213290    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:55:11.213301    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:55:11.225721    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:11.225736    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:11.262532    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:55:11.262551    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:55:11.277574    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:55:11.277588    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:55:11.290566    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:55:11.290578    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:11.302929    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:11.302945    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:11.337347    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:55:11.337360    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:55:11.355652    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:55:11.355667    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:55:11.377677    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:55:11.377693    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:55:11.389838    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:55:11.389850    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:55:13.912465    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:18.913696    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:18.913894    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:18.932242    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:55:18.932347    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:18.945609    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:55:18.945703    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:18.957348    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:55:18.957421    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:18.969324    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:55:18.969401    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:18.980364    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:55:18.980448    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:18.991887    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:55:18.991969    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:19.002923    8893 logs.go:276] 0 containers: []
	W0920 10:55:19.002934    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:19.003005    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:19.013757    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:55:19.013774    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:19.013779    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:19.050073    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:55:19.050081    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:55:19.063990    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:55:19.064003    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:55:19.081628    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:55:19.081638    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:55:19.099410    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:19.099425    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:19.122974    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:19.122982    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:19.127203    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:55:19.127210    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:55:19.164924    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:55:19.164935    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:55:19.176565    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:55:19.176578    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:55:19.187984    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:55:19.187995    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:55:19.199739    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:55:19.199750    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:55:19.219850    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:19.219860    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:19.257180    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:55:19.257195    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:55:19.275476    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:55:19.275491    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:55:19.287484    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:55:19.287498    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:55:19.303168    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:55:19.303182    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:55:19.315967    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:55:19.315978    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:21.831050    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:26.833520    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:26.833691    8893 kubeadm.go:597] duration metric: took 4m4.589419334s to restartPrimaryControlPlane
	W0920 10:55:26.833829    8893 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 10:55:26.833880    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0920 10:55:27.858521    8893 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.024631208s)
	I0920 10:55:27.858586    8893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:55:27.863615    8893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:55:27.866497    8893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:55:27.869055    8893 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:55:27.869060    8893 kubeadm.go:157] found existing configuration files:
	
	I0920 10:55:27.869084    8893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/admin.conf
	I0920 10:55:27.871769    8893 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:55:27.871798    8893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:55:27.874492    8893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/kubelet.conf
	I0920 10:55:27.877177    8893 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:55:27.877198    8893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:55:27.880753    8893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/controller-manager.conf
	I0920 10:55:27.883737    8893 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:55:27.883760    8893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:55:27.886281    8893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/scheduler.conf
	I0920 10:55:27.888984    8893 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:55:27.889012    8893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
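The grep/rm pairs above are minikube's stale-config check after the kubeadm reset: each kubeconfig under /etc/kubernetes is searched for the expected control-plane endpoint, and any file that does not mention it (here the files do not even exist, hence the exit status 2) is removed so kubeadm init can regenerate it. A minimal sketch of the same logic, with the endpoint and file list taken from the log:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:51293")
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range confs {
            data, err := os.ReadFile(f)
            // missing file or wrong endpoint: remove so kubeadm init rewrites it
            if err != nil || !bytes.Contains(data, endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                os.Remove(f)
            }
        }
    }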
	I0920 10:55:27.892143    8893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 10:55:27.909338    8893 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0920 10:55:27.909374    8893 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 10:55:27.958532    8893 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 10:55:27.958589    8893 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 10:55:27.958646    8893 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 10:55:28.008529    8893 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 10:55:28.011640    8893 out.go:235]   - Generating certificates and keys ...
	I0920 10:55:28.011683    8893 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 10:55:28.011719    8893 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 10:55:28.011762    8893 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 10:55:28.011792    8893 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 10:55:28.011835    8893 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 10:55:28.011864    8893 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 10:55:28.011897    8893 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 10:55:28.011928    8893 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 10:55:28.011965    8893 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 10:55:28.012003    8893 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 10:55:28.012022    8893 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 10:55:28.012052    8893 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 10:55:28.129163    8893 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 10:55:28.180658    8893 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 10:55:28.275368    8893 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 10:55:28.413015    8893 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 10:55:28.441925    8893 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 10:55:28.442227    8893 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 10:55:28.442287    8893 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 10:55:28.536782    8893 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 10:55:28.544905    8893 out.go:235]   - Booting up control plane ...
	I0920 10:55:28.544955    8893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 10:55:28.544990    8893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 10:55:28.545020    8893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 10:55:28.545054    8893 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 10:55:28.545159    8893 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 10:55:33.042695    8893 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501714 seconds
	I0920 10:55:33.042802    8893 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 10:55:33.047375    8893 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 10:55:33.558667    8893 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 10:55:33.558829    8893 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-568000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 10:55:34.061934    8893 kubeadm.go:310] [bootstrap-token] Using token: m87ix1.kgyx5cadz2riz65a
	I0920 10:55:34.066069    8893 out.go:235]   - Configuring RBAC rules ...
	I0920 10:55:34.066130    8893 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 10:55:34.068496    8893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 10:55:34.073671    8893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 10:55:34.074454    8893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 10:55:34.075232    8893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 10:55:34.076115    8893 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 10:55:34.079984    8893 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 10:55:34.253905    8893 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 10:55:34.470388    8893 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 10:55:34.471191    8893 kubeadm.go:310] 
	I0920 10:55:34.471224    8893 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 10:55:34.471227    8893 kubeadm.go:310] 
	I0920 10:55:34.471280    8893 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 10:55:34.471284    8893 kubeadm.go:310] 
	I0920 10:55:34.471296    8893 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 10:55:34.471328    8893 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 10:55:34.471357    8893 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 10:55:34.471362    8893 kubeadm.go:310] 
	I0920 10:55:34.471398    8893 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 10:55:34.471403    8893 kubeadm.go:310] 
	I0920 10:55:34.471425    8893 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 10:55:34.471429    8893 kubeadm.go:310] 
	I0920 10:55:34.471514    8893 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 10:55:34.471618    8893 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 10:55:34.471747    8893 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 10:55:34.471754    8893 kubeadm.go:310] 
	I0920 10:55:34.471828    8893 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 10:55:34.471865    8893 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 10:55:34.471868    8893 kubeadm.go:310] 
	I0920 10:55:34.471908    8893 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m87ix1.kgyx5cadz2riz65a \
	I0920 10:55:34.471989    8893 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:129743b48d26fde689f5c5778f41c5956c7837483c32951bacd09a1e6883dcfa \
	I0920 10:55:34.472006    8893 kubeadm.go:310] 	--control-plane 
	I0920 10:55:34.472011    8893 kubeadm.go:310] 
	I0920 10:55:34.472064    8893 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 10:55:34.472071    8893 kubeadm.go:310] 
	I0920 10:55:34.472120    8893 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m87ix1.kgyx5cadz2riz65a \
	I0920 10:55:34.472280    8893 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:129743b48d26fde689f5c5778f41c5956c7837483c32951bacd09a1e6883dcfa 
	I0920 10:55:34.472351    8893 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 10:55:34.472360    8893 cni.go:84] Creating CNI manager for ""
	I0920 10:55:34.472369    8893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:55:34.476079    8893 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:55:34.486131    8893 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:55:34.489828    8893 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
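The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist here are the bridge CNI configuration announced on the lines above. The log does not show the file's contents; the sketch below writes a representative bridge conflist of the kind this step installs (the JSON field values, subnet, and plugin names are illustrative assumptions, not the bytes minikube actually wrote):

    package main

    import "os"

    // bridgeConflist is an assumed example of a bridge CNI chain:
    // a bridge plugin with host-local IPAM, plus portmap for hostPorts.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        os.MkdirAll("/etc/cni/net.d", 0o755) // sudo mkdir -p /etc/cni/net.d
        // mirrors the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step
        os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
    }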
	I0920 10:55:34.495414    8893 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:55:34.495481    8893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-568000 minikube.k8s.io/updated_at=2024_09_20T10_55_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=running-upgrade-568000 minikube.k8s.io/primary=true
	I0920 10:55:34.495483    8893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:55:34.544222    8893 kubeadm.go:1113] duration metric: took 48.801042ms to wait for elevateKubeSystemPrivileges
	I0920 10:55:34.544264    8893 ops.go:34] apiserver oom_adj: -16
	I0920 10:55:34.545652    8893 kubeadm.go:394] duration metric: took 4m12.315917208s to StartCluster
	I0920 10:55:34.545667    8893 settings.go:142] acquiring lock: {Name:mk5f352888690de611711a90a16fd3b08e6afbf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:55:34.545828    8893 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:55:34.546214    8893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/kubeconfig: {Name:mkec5cafcbf4b1660482b6f210de54829a52092a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:55:34.546431    8893 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:55:34.546440    8893 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 10:55:34.546469    8893 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-568000"
	I0920 10:55:34.546478    8893 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-568000"
	W0920 10:55:34.546481    8893 addons.go:243] addon storage-provisioner should already be in state true
	I0920 10:55:34.546502    8893 host.go:66] Checking if "running-upgrade-568000" exists ...
	I0920 10:55:34.546504    8893 config.go:182] Loaded profile config "running-upgrade-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:55:34.546509    8893 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-568000"
	I0920 10:55:34.546520    8893 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-568000"
	I0920 10:55:34.547353    8893 kapi.go:59] client config for running-upgrade-568000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/client.key", CAFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101eae030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:55:34.547484    8893 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-568000"
	W0920 10:55:34.547489    8893 addons.go:243] addon default-storageclass should already be in state true
	I0920 10:55:34.547496    8893 host.go:66] Checking if "running-upgrade-568000" exists ...
	I0920 10:55:34.551140    8893 out.go:177] * Verifying Kubernetes components...
	I0920 10:55:34.551449    8893 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:55:34.555460    8893 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:55:34.555483    8893 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0920 10:55:34.559027    8893 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:55:34.563209    8893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:55:34.567211    8893 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:55:34.567219    8893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:55:34.567227    8893 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0920 10:55:34.635441    8893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:55:34.640503    8893 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:55:34.640552    8893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:55:34.644921    8893 api_server.go:72] duration metric: took 98.4795ms to wait for apiserver process to appear ...
	I0920 10:55:34.644929    8893 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:55:34.644936    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:34.661487    8893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:55:34.701014    8893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:55:35.018489    8893 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 10:55:35.018499    8893 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 10:55:39.647030    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:39.647076    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:44.647427    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:44.647474    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:49.647813    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:49.647844    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:54.648252    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:54.648277    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:59.648850    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:59.648872    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:04.649586    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:04.649623    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0920 10:56:05.020754    8893 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0920 10:56:05.025067    8893 out.go:177] * Enabled addons: storage-provisioner
	I0920 10:56:05.036004    8893 addons.go:510] duration metric: took 30.489724042s for enable addons: enabled=[storage-provisioner]
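
The split outcome above has a plausible explanation in the transport each path uses: both addon manifests are applied from inside the guest over the localhost:51261 SSH forward (the sshutil lines above), which succeeds, while the default-storageclass callback appears to go through the host-side REST client built at kapi.go:59 (Host https://10.0.2.15:8443). 10.0.2.15 is the guest's address inside the QEMU user network and is not routable from the host, which would account for the dial tcp i/o timeout here and for every /healthz probe in this run timing out the same way.
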
	I0920 10:56:09.650572    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:09.650619    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:14.652001    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:14.652061    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:19.653583    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:19.653608    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:24.653862    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:24.653912    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:29.655920    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:29.655959    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:34.658170    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
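
The probe loop above fails identically for a full minute: each GET to /healthz is cut off by the HTTP client timeout, and api_server.go retries every five seconds until it falls back to gathering logs. A minimal, self-contained sketch of that pattern follows, assuming the ~5s spacing of the timestamps is the client timeout and reusing the 6m0s budget from the "Will wait 6m0s for node" line; the real probe authenticates with the cluster CA from the kapi.go config rather than skipping verification.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumed from the ~5s probe spacing
            Transport: &http.Transport{
                // Simplification: the real probe pins the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                // Produces lines like the "stopped: ... Client.Timeout
                // exceeded" entries above; the timeout paces the retries.
                fmt.Println("stopped:", err)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
            time.Sleep(time.Second) // reachable but unhealthy: brief pause
        }
        fmt.Println("gave up waiting for /healthz")
    }
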
	I0920 10:56:34.658276    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:34.669851    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:56:34.669949    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:34.681453    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:56:34.681541    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:34.692244    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:56:34.692331    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:34.703067    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:56:34.703149    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:34.714895    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:56:34.714994    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:34.726321    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:56:34.726405    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:34.737227    8893 logs.go:276] 0 containers: []
	W0920 10:56:34.737239    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:34.737331    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:34.748883    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:56:34.748900    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:34.748907    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:34.786262    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:56:34.786272    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:56:34.802291    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:56:34.802303    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:56:34.817870    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:56:34.817888    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:56:34.830460    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:56:34.830472    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:56:34.842528    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:34.842539    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:34.868179    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:56:34.868190    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:34.880456    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:34.880472    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:34.885402    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:34.885410    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:34.924466    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:56:34.924482    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:56:34.940942    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:56:34.940953    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:56:34.953263    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:56:34.953275    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:56:34.971942    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:56:34.971962    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
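
Each diagnostics pass above boils down to two docker invocations per component: enumerate container IDs matching k8s_<name>, then tail 400 lines from each. A self-contained sketch of that loop, with the ssh_runner transport elided and the component list trimmed for brevity:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors the "docker ps -a --filter=name=k8s_<name>
    // --format={{.ID}}" runs in the log.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
            ids, _ := containerIDs(name)
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", name)
                continue
            }
            for _, id := range ids {
                // Mirrors the `docker logs --tail 400 <id>` runs above.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s", name, id, logs)
            }
        }
    }
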
	I0920 10:56:37.488598    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:42.490948    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:42.491180    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:42.506590    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:56:42.506690    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:42.519245    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:56:42.519329    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:42.533093    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:56:42.533179    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:42.544286    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:56:42.544364    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:42.555470    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:56:42.555552    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:42.567049    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:56:42.567130    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:42.588399    8893 logs.go:276] 0 containers: []
	W0920 10:56:42.588410    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:42.588473    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:42.599331    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:56:42.599348    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:56:42.599355    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:56:42.615974    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:42.615991    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:42.641709    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:42.641721    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:42.679487    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:56:42.679500    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:56:42.692940    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:56:42.692953    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:56:42.705000    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:56:42.705012    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:56:42.719332    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:56:42.719347    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:56:42.732005    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:56:42.732016    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:56:42.754118    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:56:42.754131    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:56:42.766967    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:56:42.766979    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:42.779629    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:42.779642    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:42.815104    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:42.815116    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:42.820047    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:56:42.820058    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:56:45.341294    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:50.343633    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:50.343841    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:50.363987    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:56:50.364095    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:50.376222    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:56:50.376301    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:50.388797    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:56:50.388883    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:50.399090    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:56:50.399168    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:50.409749    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:56:50.409833    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:50.420549    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:56:50.420628    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:50.432508    8893 logs.go:276] 0 containers: []
	W0920 10:56:50.432522    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:50.432588    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:50.443989    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:56:50.444005    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:50.444011    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:50.482465    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:56:50.482478    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:56:50.497915    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:56:50.497925    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:56:50.512978    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:56:50.512988    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:56:50.525442    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:50.525458    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:50.550805    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:56:50.550828    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:56:50.564447    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:56:50.564462    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:50.576678    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:50.576688    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:50.612545    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:50.612564    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:50.617880    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:56:50.617891    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:56:50.631245    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:56:50.631257    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:56:50.647430    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:56:50.647447    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:56:50.660637    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:56:50.660653    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:56:53.180813    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:58.183161    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:58.183449    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:58.203476    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:56:58.203595    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:58.217782    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:56:58.217874    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:58.230582    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:56:58.230666    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:58.241436    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:56:58.241512    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:58.252006    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:56:58.252093    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:58.262800    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:56:58.262872    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:58.272716    8893 logs.go:276] 0 containers: []
	W0920 10:56:58.272731    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:58.272799    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:58.284915    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:56:58.284931    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:56:58.284937    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:56:58.303237    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:56:58.303252    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:56:58.315303    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:58.315313    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:58.339005    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:58.339019    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:58.344717    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:58.344726    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:58.385317    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:56:58.385326    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:56:58.400162    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:56:58.400175    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:56:58.412818    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:56:58.412829    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:56:58.429310    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:56:58.429325    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:56:58.442281    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:56:58.442293    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:58.454720    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:58.454734    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:58.490500    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:56:58.490514    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:56:58.507982    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:56:58.507993    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:01.022738    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:06.024970    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:06.025184    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:06.038390    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:06.038483    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:06.049569    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:06.049659    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:06.060301    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:57:06.060382    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:06.070710    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:06.070788    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:06.081018    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:06.081104    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:06.095146    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:06.095234    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:06.105979    8893 logs.go:276] 0 containers: []
	W0920 10:57:06.105990    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:06.106065    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:06.116814    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:57:06.116826    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:06.116831    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:06.128709    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:06.128723    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:06.140400    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:06.140414    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:06.163594    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:06.163602    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:06.175992    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:06.176008    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:06.210549    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:06.210560    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:06.226708    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:06.226718    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:06.240964    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:06.240974    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:06.252528    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:06.252538    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:06.263954    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:06.263964    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:06.280256    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:06.280273    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:06.299187    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:06.299202    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:06.304291    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:06.304303    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:08.848138    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:13.850764    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:13.851246    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:13.881435    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:13.881592    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:13.900201    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:13.900314    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:13.914727    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:57:13.914813    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:13.926850    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:13.926934    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:13.937685    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:13.937774    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:13.952248    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:13.952336    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:13.962354    8893 logs.go:276] 0 containers: []
	W0920 10:57:13.962371    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:13.962441    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:13.972894    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:57:13.972913    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:13.972918    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:13.985271    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:13.985282    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:14.001153    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:14.001163    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:14.029987    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:14.030002    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:14.041336    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:14.041346    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:14.053647    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:14.053663    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:14.058265    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:14.058275    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:14.072803    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:14.072818    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:14.086461    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:14.086470    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:14.099052    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:14.099064    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:14.122614    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:14.122621    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:14.155209    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:14.155219    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:14.192139    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:14.192148    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:16.706647    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:21.709012    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:21.709298    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:21.734327    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:21.734454    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:21.750072    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:21.750153    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:21.762462    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:57:21.762538    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:21.773385    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:21.773458    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:21.784146    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:21.784216    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:21.801707    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:21.801799    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:21.817463    8893 logs.go:276] 0 containers: []
	W0920 10:57:21.817478    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:21.817554    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:21.830797    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:57:21.830812    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:21.830818    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:21.845113    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:21.845126    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:21.859428    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:21.859443    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:21.871727    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:21.871738    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:21.889556    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:21.889568    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:21.901457    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:21.901471    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:21.918788    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:21.918798    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:21.923209    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:21.923216    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:21.994505    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:21.994515    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:22.006428    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:22.006438    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:22.018655    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:22.018670    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:22.043838    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:22.043855    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:22.055106    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:22.055119    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:24.592035    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:29.592969    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:29.593280    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:29.625115    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:29.625224    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:29.640851    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:29.640934    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:29.652342    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:57:29.652419    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:29.662385    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:29.662473    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:29.676746    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:29.676830    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:29.687477    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:29.687557    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:29.697996    8893 logs.go:276] 0 containers: []
	W0920 10:57:29.698008    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:29.698082    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:29.708738    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:57:29.708757    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:29.708763    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:29.741957    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:29.741965    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:29.778184    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:29.778194    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:29.792873    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:29.792886    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:29.806718    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:29.806730    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:29.819066    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:29.819076    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:29.831226    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:29.831234    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:29.835786    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:29.835795    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:29.847670    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:29.847685    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:29.863283    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:29.863292    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:29.881216    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:29.881229    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:29.896987    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:29.896997    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:29.919858    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:29.919866    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:32.432965    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:37.435201    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:37.435427    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:37.452032    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:37.452131    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:37.464408    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:37.464495    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:37.474769    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:57:37.474846    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:37.489133    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:37.489207    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:37.499866    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:37.499944    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:37.510700    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:37.510770    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:37.521369    8893 logs.go:276] 0 containers: []
	W0920 10:57:37.521381    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:37.521446    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:37.536800    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:57:37.536817    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:37.536822    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:37.541316    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:37.541324    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:37.555538    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:37.555553    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:37.569379    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:37.569390    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:37.581443    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:37.581454    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:37.596308    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:37.596317    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:37.613934    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:37.613945    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:37.625647    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:37.625658    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:37.658054    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:37.658063    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:37.669753    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:37.669765    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:37.681204    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:37.681214    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:37.706024    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:37.706035    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:37.719135    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:37.719148    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:40.257154    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:45.259981    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:45.260136    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:45.274845    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:45.274938    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:45.286890    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:45.286964    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:45.298593    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:57:45.298673    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:45.309668    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:45.309739    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:45.320121    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:45.320198    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:45.330546    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:45.330633    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:45.340701    8893 logs.go:276] 0 containers: []
	W0920 10:57:45.340713    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:45.340779    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:45.351138    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:57:45.351153    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:45.351159    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:45.362918    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:45.362929    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:45.381516    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:45.381531    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:45.393284    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:45.393299    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:45.429714    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:45.429724    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:45.465582    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:45.465593    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:45.488264    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:45.488274    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:45.502125    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:45.502135    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:45.514506    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:45.514516    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:45.519024    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:45.519030    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:45.530657    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:45.530671    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:45.546469    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:45.546480    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:45.571463    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:45.571473    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:48.084702    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:53.087064    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:53.087403    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:53.116099    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:53.116241    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:53.134672    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:53.134771    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:53.147925    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:57:53.148016    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:53.159009    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:53.159093    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:53.169603    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:53.169682    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:53.181484    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:53.181564    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:53.192054    8893 logs.go:276] 0 containers: []
	W0920 10:57:53.192067    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:53.192140    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:53.205877    8893 logs.go:276] 1 containers: [563be55f69cf]
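	[Editor's note] The run of docker ps -a --filter=name=k8s_<component> commands above (logs.go:276) enumerates container IDs per control-plane component using the k8s_ name prefix that kubeadm-style Docker runtimes give their containers; the recurring kindnet warning is expected here, since this cluster runs no kindnet DaemonSet. A rough equivalent, with the component list taken from the log (illustrative only):

	    // enumerate.go — sketch of the per-component container discovery.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	components := []string{
	    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	    	}
	    	for _, c := range components {
	    		out, err := exec.Command("docker", "ps", "-a",
	    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
	    		if err != nil {
	    			fmt.Printf("docker ps failed for %s: %v\n", c, err)
	    			continue
	    		}
	    		ids := strings.Fields(string(out))
	    		fmt.Printf("%d containers: %v\n", len(ids), ids)
	    		if len(ids) == 0 {
	    			// Same condition as the W ... logs.go:278 lines in the report.
	    			fmt.Printf("No container was found matching %q\n", c)
	    		}
	    	}
	    }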
	I0920 10:57:53.205898    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:53.205904    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:53.218166    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:53.218179    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:53.242376    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:53.242384    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:53.247226    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:53.247233    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:53.265802    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:53.265813    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:53.302300    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:53.302312    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:53.316022    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:53.316035    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:53.328232    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:53.328243    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:53.345567    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:53.345581    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:53.357230    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:53.357243    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:53.391823    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:53.391836    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:53.404076    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:57:53.404090    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:57:53.415233    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:53.415246    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:53.433601    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:53.433611    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:53.447422    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:57:53.447432    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:57:55.960163    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:00.962525    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:00.962812    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:00.983455    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:00.983576    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:00.998705    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:00.998795    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:01.012545    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:01.012631    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:01.024124    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:01.024212    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:01.034906    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:01.034990    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:01.045308    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:01.045389    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:01.055092    8893 logs.go:276] 0 containers: []
	W0920 10:58:01.055108    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:01.055180    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:01.066167    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:01.066188    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:01.066194    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:01.080647    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:01.080657    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:01.102774    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:01.102785    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:01.114926    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:01.114938    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:01.132343    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:01.132353    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:01.136799    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:01.136809    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:01.176008    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:01.176024    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:01.188442    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:01.188453    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:01.212902    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:01.212910    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:01.237856    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:01.237872    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:01.250880    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:01.250892    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:01.265334    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:01.265345    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:01.277758    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:01.277767    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:01.311798    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:01.311814    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:01.323086    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:01.323097    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:03.836664    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:08.839091    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:08.839624    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:08.875654    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:08.875808    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:08.895908    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:08.896027    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:08.910524    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:08.910628    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:08.922949    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:08.923032    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:08.934239    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:08.934325    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:08.945088    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:08.945176    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:08.955672    8893 logs.go:276] 0 containers: []
	W0920 10:58:08.955687    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:08.955756    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:08.966243    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:08.966262    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:08.966267    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:08.991123    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:08.991131    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:09.005805    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:09.005820    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:09.018406    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:09.018417    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:09.031047    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:09.031057    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:09.043041    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:09.043053    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:09.058956    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:09.058973    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:09.071335    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:09.071345    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:09.108070    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:09.108080    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:09.122816    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:09.122828    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:09.139189    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:09.139211    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:09.151430    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:09.151441    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:09.169162    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:09.169173    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:09.173628    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:09.173635    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:09.185269    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:09.185280    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:11.718017    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:16.720292    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:16.720554    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:16.739039    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:16.739152    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:16.752510    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:16.752594    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:16.764682    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:16.764749    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:16.774623    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:16.774688    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:16.785849    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:16.785930    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:16.796576    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:16.796650    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:16.806940    8893 logs.go:276] 0 containers: []
	W0920 10:58:16.806955    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:16.807016    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:16.817490    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:16.817507    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:16.817512    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:16.852416    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:16.852430    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:16.864006    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:16.864017    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:16.876319    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:16.876334    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:16.892085    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:16.892096    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:16.903741    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:16.903752    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:16.938302    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:16.938313    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:16.958341    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:16.958367    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:16.969515    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:16.969530    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:16.981550    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:16.981560    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:16.993379    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:16.993390    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:16.997885    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:16.997891    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:17.012046    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:17.012056    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:17.029003    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:17.029016    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:17.053783    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:17.053792    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:19.565526    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:24.567725    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:24.567861    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:24.579445    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:24.579546    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:24.594221    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:24.594297    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:24.604697    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:24.604783    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:24.615486    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:24.615571    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:24.626216    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:24.626302    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:24.637141    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:24.637224    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:24.647310    8893 logs.go:276] 0 containers: []
	W0920 10:58:24.647320    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:24.647391    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:24.657738    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:24.657756    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:24.657762    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:24.669841    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:24.669851    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:24.704968    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:24.704979    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:24.725137    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:24.725147    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:24.741133    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:24.741148    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:24.752948    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:24.752961    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:24.776808    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:24.776819    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:24.781260    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:24.781270    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:24.796816    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:24.796830    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:24.814243    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:24.814255    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:24.825420    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:24.825429    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:24.837195    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:24.837208    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:24.848342    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:24.848354    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:24.859942    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:24.859952    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:24.894395    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:24.894406    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:27.410497    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:32.412823    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:32.413060    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:32.428348    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:32.428454    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:32.441486    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:32.441581    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:32.452003    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:32.452083    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:32.462993    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:32.463076    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:32.474004    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:32.474087    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:32.485073    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:32.485147    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:32.495653    8893 logs.go:276] 0 containers: []
	W0920 10:58:32.495675    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:32.495749    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:32.505982    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:32.506000    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:32.506005    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:32.517691    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:32.517705    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:32.543210    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:32.543219    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:32.548039    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:32.548048    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:32.559693    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:32.559703    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:32.577257    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:32.577269    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:32.611913    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:32.611920    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:32.623972    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:32.623982    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:32.641241    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:32.641251    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:32.656562    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:32.656572    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:32.675055    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:32.675065    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:32.695300    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:32.695310    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:32.706955    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:32.706965    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:32.718767    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:32.718778    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:32.756669    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:32.756680    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:35.280016    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:40.282450    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:40.282951    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:40.323156    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:40.323321    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:40.346220    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:40.346312    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:40.359121    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:40.359199    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:40.377181    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:40.377262    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:40.388182    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:40.388274    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:40.403309    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:40.403401    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:40.413746    8893 logs.go:276] 0 containers: []
	W0920 10:58:40.413758    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:40.413838    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:40.428781    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:40.428803    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:40.428809    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:40.461981    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:40.461995    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:40.483522    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:40.483537    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:40.496038    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:40.496049    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:40.520861    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:40.520869    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:40.555458    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:40.555472    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:40.570039    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:40.570051    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:40.581926    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:40.581937    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:40.593274    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:40.593285    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:40.597713    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:40.597721    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:40.608995    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:40.609003    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:40.621034    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:40.621045    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:40.636771    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:40.636781    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:40.654841    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:40.654853    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:40.666486    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:40.666497    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:43.180412    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:48.181309    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:48.181480    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:48.196116    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:48.196223    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:48.207620    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:48.207699    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:48.218640    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:48.218733    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:48.228936    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:48.229014    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:48.239544    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:48.239625    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:48.249786    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:48.249871    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:48.261442    8893 logs.go:276] 0 containers: []
	W0920 10:58:48.261452    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:48.261520    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:48.271572    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:48.271591    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:48.271596    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:48.286163    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:48.286173    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:48.299221    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:48.299234    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:48.311019    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:48.311034    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:48.322644    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:48.322656    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:48.335150    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:48.335161    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:48.354255    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:48.354264    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:48.379248    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:48.379258    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:48.384088    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:48.384095    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:48.398377    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:48.398387    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:48.410041    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:48.410055    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:48.421637    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:48.421647    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:48.433524    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:48.433538    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:48.466045    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:48.466054    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:48.504573    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:48.504585    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:51.025228    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:56.026327    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:56.026443    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:56.038331    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:56.038417    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:56.049461    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:56.049540    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:56.061306    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:56.061396    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:56.077817    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:56.077906    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:56.089530    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:56.089614    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:56.102985    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:56.103068    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:56.115205    8893 logs.go:276] 0 containers: []
	W0920 10:58:56.115216    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:56.115283    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:56.126535    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:56.126552    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:56.126558    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:56.162931    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:56.162945    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:56.176210    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:56.176224    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:56.188854    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:56.188869    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:56.204840    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:56.204852    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:56.217470    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:56.217479    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:56.254203    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:56.254229    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:56.270129    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:56.270142    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:56.282393    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:56.282406    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:56.294700    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:56.294713    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:56.307274    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:56.307286    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:56.326921    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:56.326937    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:56.355021    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:56.355038    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:56.360469    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:56.360482    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:56.375892    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:56.375903    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:58.888549    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:03.890735    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:03.890929    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:59:03.902635    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:59:03.902732    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:59:03.913879    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:59:03.913961    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:59:03.924651    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:59:03.924731    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:59:03.936427    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:59:03.936501    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:59:03.947400    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:59:03.947487    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:59:03.958168    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:59:03.958248    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:59:03.973240    8893 logs.go:276] 0 containers: []
	W0920 10:59:03.973251    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:59:03.973314    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:59:03.983923    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:59:03.983941    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:59:03.983946    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:59:03.995507    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:59:03.995520    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:59:04.000259    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:59:04.000266    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:59:04.014168    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:59:04.014183    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:59:04.029210    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:59:04.029223    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:59:04.051843    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:59:04.051851    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:59:04.087030    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:59:04.087042    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:59:04.102206    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:59:04.102219    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:59:04.117364    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:59:04.117377    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:59:04.129198    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:59:04.129212    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:59:04.145269    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:59:04.145282    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:59:04.179423    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:59:04.179432    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:59:04.191431    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:59:04.191444    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:59:04.203209    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:59:04.203223    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:59:04.215014    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:59:04.215028    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:59:06.734673    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:11.736960    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:11.737077    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:59:11.749993    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:59:11.750081    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:59:11.760720    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:59:11.760805    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:59:11.771453    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:59:11.771540    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:59:11.782381    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:59:11.782466    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:59:11.792851    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:59:11.792938    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:59:11.803481    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:59:11.803561    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:59:11.815916    8893 logs.go:276] 0 containers: []
	W0920 10:59:11.815932    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:59:11.816009    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:59:11.828232    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:59:11.828251    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:59:11.828257    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:59:11.865280    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:59:11.865292    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:59:11.878265    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:59:11.878277    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:59:11.891806    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:59:11.891819    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:59:11.906000    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:59:11.906012    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:59:11.940511    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:59:11.940522    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:59:11.952369    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:59:11.952382    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:59:11.967723    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:59:11.967737    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:59:11.986133    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:59:11.986144    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:59:11.998282    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:59:11.998302    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:59:12.014606    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:59:12.014624    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:59:12.027606    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:59:12.027619    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:59:12.032539    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:59:12.032552    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:59:12.045348    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:59:12.045360    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:59:12.064172    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:59:12.064182    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:59:14.589911    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:19.592267    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:19.592673    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:59:19.638428    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:59:19.638541    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:59:19.653292    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:59:19.653392    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:59:19.665755    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:59:19.665845    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:59:19.676433    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:59:19.676508    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:59:19.687161    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:59:19.687244    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:59:19.697790    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:59:19.697875    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:59:19.707731    8893 logs.go:276] 0 containers: []
	W0920 10:59:19.707747    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:59:19.707813    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:59:19.718477    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:59:19.718498    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:59:19.718503    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:59:19.730961    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:59:19.730975    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:59:19.766798    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:59:19.766808    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:59:19.784882    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:59:19.784893    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:59:19.797275    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:59:19.797291    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:59:19.810485    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:59:19.810498    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:59:19.834427    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:59:19.834435    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:59:19.869440    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:59:19.869451    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:59:19.886724    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:59:19.886735    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:59:19.898529    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:59:19.898538    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:59:19.913691    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:59:19.913702    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:59:19.925757    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:59:19.925771    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:59:19.943270    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:59:19.943280    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:59:19.948484    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:59:19.948494    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:59:19.969233    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:59:19.969243    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:59:22.483858    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:27.486078    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:27.486363    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:59:27.514269    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:59:27.514415    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:59:27.532365    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:59:27.532451    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:59:27.546135    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:59:27.546212    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:59:27.557271    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:59:27.557344    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:59:27.568277    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:59:27.568362    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:59:27.579181    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:59:27.579268    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:59:27.590114    8893 logs.go:276] 0 containers: []
	W0920 10:59:27.590130    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:59:27.590204    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:59:27.600877    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:59:27.600898    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:59:27.600904    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:59:27.617320    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:59:27.617330    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:59:27.629080    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:59:27.629094    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:59:27.645250    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:59:27.645260    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:59:27.657357    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:59:27.657372    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:59:27.681650    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:59:27.681657    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:59:27.714691    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:59:27.714697    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:59:27.725943    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:59:27.725954    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:59:27.738051    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:59:27.738062    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:59:27.751877    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:59:27.751893    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:59:27.763985    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:59:27.763997    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:59:27.781924    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:59:27.781940    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:59:27.796962    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:59:27.796971    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:59:27.832378    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:59:27.832389    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:59:27.848133    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:59:27.848144    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:59:30.354957    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:35.357287    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:35.362994    8893 out.go:201] 
	W0920 10:59:35.366897    8893 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0920 10:59:35.366913    8893 out.go:270] * 
	W0920 10:59:35.368040    8893 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:59:35.382928    8893 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-568000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-20 10:59:35.486845 -0700 PDT m=+1272.267918251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-568000 -n running-upgrade-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-568000 -n running-upgrade-568000: exit status 2 (15.699165083s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-568000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-239000          | force-systemd-flag-239000 | jenkins | v1.34.0 | 20 Sep 24 10:49 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-969000              | force-systemd-env-969000  | jenkins | v1.34.0 | 20 Sep 24 10:49 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-969000           | force-systemd-env-969000  | jenkins | v1.34.0 | 20 Sep 24 10:49 PDT | 20 Sep 24 10:49 PDT |
	| start   | -p docker-flags-208000                | docker-flags-208000       | jenkins | v1.34.0 | 20 Sep 24 10:49 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-239000             | force-systemd-flag-239000 | jenkins | v1.34.0 | 20 Sep 24 10:49 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-239000          | force-systemd-flag-239000 | jenkins | v1.34.0 | 20 Sep 24 10:49 PDT | 20 Sep 24 10:49 PDT |
	| start   | -p cert-expiration-031000             | cert-expiration-031000    | jenkins | v1.34.0 | 20 Sep 24 10:49 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-208000 ssh               | docker-flags-208000       | jenkins | v1.34.0 | 20 Sep 24 10:49 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-208000 ssh               | docker-flags-208000       | jenkins | v1.34.0 | 20 Sep 24 10:49 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-208000                | docker-flags-208000       | jenkins | v1.34.0 | 20 Sep 24 10:49 PDT | 20 Sep 24 10:49 PDT |
	| start   | -p cert-options-654000                | cert-options-654000       | jenkins | v1.34.0 | 20 Sep 24 10:49 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-654000 ssh               | cert-options-654000       | jenkins | v1.34.0 | 20 Sep 24 10:50 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-654000 -- sudo        | cert-options-654000       | jenkins | v1.34.0 | 20 Sep 24 10:50 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-654000                | cert-options-654000       | jenkins | v1.34.0 | 20 Sep 24 10:50 PDT | 20 Sep 24 10:50 PDT |
	| start   | -p running-upgrade-568000             | minikube                  | jenkins | v1.26.0 | 20 Sep 24 10:50 PDT | 20 Sep 24 10:50 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-568000             | running-upgrade-568000    | jenkins | v1.34.0 | 20 Sep 24 10:50 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-031000             | cert-expiration-031000    | jenkins | v1.34.0 | 20 Sep 24 10:52 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-031000             | cert-expiration-031000    | jenkins | v1.34.0 | 20 Sep 24 10:53 PDT | 20 Sep 24 10:53 PDT |
	| start   | -p kubernetes-upgrade-744000          | kubernetes-upgrade-744000 | jenkins | v1.34.0 | 20 Sep 24 10:53 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-744000          | kubernetes-upgrade-744000 | jenkins | v1.34.0 | 20 Sep 24 10:53 PDT | 20 Sep 24 10:53 PDT |
	| start   | -p kubernetes-upgrade-744000          | kubernetes-upgrade-744000 | jenkins | v1.34.0 | 20 Sep 24 10:53 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-744000          | kubernetes-upgrade-744000 | jenkins | v1.34.0 | 20 Sep 24 10:53 PDT | 20 Sep 24 10:53 PDT |
	| start   | -p stopped-upgrade-423000             | minikube                  | jenkins | v1.26.0 | 20 Sep 24 10:53 PDT | 20 Sep 24 10:54 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-423000 stop           | minikube                  | jenkins | v1.26.0 | 20 Sep 24 10:54 PDT | 20 Sep 24 10:54 PDT |
	| start   | -p stopped-upgrade-423000             | stopped-upgrade-423000    | jenkins | v1.34.0 | 20 Sep 24 10:54 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 10:54:15
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 10:54:15.644221    9036 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:54:15.644373    9036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:54:15.644377    9036 out.go:358] Setting ErrFile to fd 2...
	I0920 10:54:15.644380    9036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:54:15.644507    9036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:54:15.645634    9036 out.go:352] Setting JSON to false
	I0920 10:54:15.663750    9036 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5018,"bootTime":1726849837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:54:15.663866    9036 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:54:15.668896    9036 out.go:177] * [stopped-upgrade-423000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:54:15.676866    9036 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:54:15.676909    9036 notify.go:220] Checking for updates...
	I0920 10:54:15.684840    9036 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:54:15.687880    9036 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:54:15.690896    9036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:54:15.693816    9036 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:54:15.696882    9036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:54:15.700177    9036 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:54:15.702857    9036 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 10:54:15.705859    9036 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:54:15.709794    9036 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:54:15.716839    9036 start.go:297] selected driver: qemu2
	I0920 10:54:15.716844    9036 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51540 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:54:15.716895    9036 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:54:15.719696    9036 cni.go:84] Creating CNI manager for ""
	I0920 10:54:15.719735    9036 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:54:15.719753    9036 start.go:340] cluster config:
	{Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51540 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:54:15.719809    9036 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:54:15.727811    9036 out.go:177] * Starting "stopped-upgrade-423000" primary control-plane node in "stopped-upgrade-423000" cluster
	I0920 10:54:15.731822    9036 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:54:15.731854    9036 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0920 10:54:15.731861    9036 cache.go:56] Caching tarball of preloaded images
	I0920 10:54:15.731939    9036 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:54:15.731946    9036 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0920 10:54:15.731997    9036 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/config.json ...
	I0920 10:54:15.732397    9036 start.go:360] acquireMachinesLock for stopped-upgrade-423000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:54:15.732431    9036 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "stopped-upgrade-423000"
	I0920 10:54:15.732440    9036 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:54:15.732446    9036 fix.go:54] fixHost starting: 
	I0920 10:54:15.732553    9036 fix.go:112] recreateIfNeeded on stopped-upgrade-423000: state=Stopped err=<nil>
	W0920 10:54:15.732561    9036 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:54:15.736774    9036 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-423000" ...
	I0920 10:54:15.459675    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:15.460292    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:15.496604    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:54:15.496772    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:15.517086    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:54:15.517206    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:15.531978    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:54:15.532075    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:15.544194    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:54:15.544281    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:15.555170    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:54:15.555258    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:15.565732    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:54:15.565814    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:15.576149    8893 logs.go:276] 0 containers: []
	W0920 10:54:15.576164    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:15.576226    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:15.586365    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:54:15.586383    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:15.586388    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:15.601654    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:54:15.601666    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:54:15.614658    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:15.614671    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:15.653493    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:54:15.653510    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:54:15.692454    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:15.692462    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:15.716818    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:54:15.716829    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:54:15.734981    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:54:15.734989    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:54:15.746807    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:54:15.746817    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:54:15.758682    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:15.758693    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:15.763034    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:15.763044    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:15.800980    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:54:15.800992    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:54:15.816873    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:54:15.816888    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:15.834734    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:54:15.834748    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:54:15.848359    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:54:15.848370    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:54:15.860240    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:54:15.860250    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:54:15.880288    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:15.880306    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:15.893634    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:54:15.893647    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:15.744751    9036 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:54:15.744847    9036 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51506-:22,hostfwd=tcp::51507-:2376,hostname=stopped-upgrade-423000 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/disk.qcow2
	I0920 10:54:15.794993    9036 main.go:141] libmachine: STDOUT: 
	I0920 10:54:15.795019    9036 main.go:141] libmachine: STDERR: 
	I0920 10:54:15.795026    9036 main.go:141] libmachine: Waiting for VM to start (ssh -p 51506 docker@127.0.0.1)...
	I0920 10:54:18.408933    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:23.411223    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:23.411358    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:23.424936    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:54:23.425015    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:23.436411    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:54:23.436480    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:23.447101    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:54:23.447169    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:23.457593    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:54:23.457681    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:23.467849    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:54:23.467931    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:23.478997    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:54:23.479082    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:23.489823    8893 logs.go:276] 0 containers: []
	W0920 10:54:23.489836    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:23.489910    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:23.500343    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:54:23.500362    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:23.500368    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:23.538740    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:54:23.538748    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:54:23.552767    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:23.552777    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:23.563905    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:23.563915    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:23.599879    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:54:23.599892    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:54:23.639724    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:54:23.639737    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:54:23.651322    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:23.651333    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:23.675091    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:54:23.675098    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:23.688826    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:54:23.688841    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:54:23.701068    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:54:23.701079    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:23.714505    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:23.714515    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:23.719050    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:54:23.719059    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:54:23.736972    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:54:23.736983    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:54:23.748291    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:54:23.748301    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:54:23.766348    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:23.766358    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:23.781890    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:54:23.781900    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:54:23.799322    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:54:23.799331    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:54:26.313488    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:31.315815    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:31.316412    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:31.351587    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:54:31.351748    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:31.382878    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:54:31.382980    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:31.396341    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:54:31.396413    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:31.408008    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:54:31.408079    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:31.418613    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:54:31.418695    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:31.428770    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:54:31.428848    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:31.439519    8893 logs.go:276] 0 containers: []
	W0920 10:54:31.439537    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:31.439601    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:31.450461    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:54:31.450477    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:54:31.450482    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:31.465075    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:31.465088    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:31.480856    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:54:31.480868    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:54:31.493011    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:31.493025    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:31.497251    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:31.497261    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:31.532547    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:54:31.532559    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:54:31.546824    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:54:31.546833    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:54:31.557682    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:54:31.557694    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:54:31.569523    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:54:31.569533    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:54:31.581922    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:31.581933    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:31.606849    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:54:31.606862    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:31.619163    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:31.619176    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:31.654405    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:54:31.654411    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:54:31.675064    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:54:31.675073    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:54:31.686381    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:54:31.686389    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:54:31.703563    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:31.703574    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:31.715251    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:54:31.715260    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:54:34.254063    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:35.801924    9036 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/config.json ...
	I0920 10:54:35.802835    9036 machine.go:93] provisionDockerMachine start ...
	I0920 10:54:35.803228    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:35.803777    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:35.803793    9036 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 10:54:35.877551    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 10:54:35.877586    9036 buildroot.go:166] provisioning hostname "stopped-upgrade-423000"
	I0920 10:54:35.877721    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:35.877984    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:35.877995    9036 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-423000 && echo "stopped-upgrade-423000" | sudo tee /etc/hostname
	I0920 10:54:35.944348    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-423000
	
	I0920 10:54:35.944454    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:35.944656    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:35.944670    9036 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-423000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-423000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-423000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 10:54:36.000537    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:54:36.000552    9036 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19678-6679/.minikube CaCertPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19678-6679/.minikube}
	I0920 10:54:36.000562    9036 buildroot.go:174] setting up certificates
	I0920 10:54:36.000567    9036 provision.go:84] configureAuth start
	I0920 10:54:36.000571    9036 provision.go:143] copyHostCerts
	I0920 10:54:36.000659    9036 exec_runner.go:144] found /Users/jenkins/minikube-integration/19678-6679/.minikube/cert.pem, removing ...
	I0920 10:54:36.000667    9036 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19678-6679/.minikube/cert.pem
	I0920 10:54:36.000831    9036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19678-6679/.minikube/cert.pem (1123 bytes)
	I0920 10:54:36.001051    9036 exec_runner.go:144] found /Users/jenkins/minikube-integration/19678-6679/.minikube/key.pem, removing ...
	I0920 10:54:36.001056    9036 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19678-6679/.minikube/key.pem
	I0920 10:54:36.001119    9036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19678-6679/.minikube/key.pem (1675 bytes)
	I0920 10:54:36.001255    9036 exec_runner.go:144] found /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.pem, removing ...
	I0920 10:54:36.001259    9036 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.pem
	I0920 10:54:36.001341    9036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.pem (1078 bytes)
	I0920 10:54:36.001446    9036 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-423000 san=[127.0.0.1 localhost minikube stopped-upgrade-423000]
	I0920 10:54:36.157516    9036 provision.go:177] copyRemoteCerts
	I0920 10:54:36.157575    9036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 10:54:36.157587    9036 sshutil.go:53] new ssh client: &{IP:localhost Port:51506 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0920 10:54:36.185054    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 10:54:36.191816    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 10:54:36.199189    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 10:54:36.206159    9036 provision.go:87] duration metric: took 205.583458ms to configureAuth
	I0920 10:54:36.206169    9036 buildroot.go:189] setting minikube options for container-runtime
	I0920 10:54:36.206295    9036 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:54:36.206344    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:36.206431    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:36.206436    9036 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 10:54:36.256794    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 10:54:36.256804    9036 buildroot.go:70] root file system type: tmpfs
	I0920 10:54:36.256861    9036 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 10:54:36.256918    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:36.257028    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:36.257065    9036 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 10:54:36.312513    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 10:54:36.312573    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:36.312682    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:36.312698    9036 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 10:54:36.671447    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0920 10:54:36.671461    9036 machine.go:96] duration metric: took 868.618792ms to provisionDockerMachine
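The unit update above uses a write-if-changed pattern: `diff -u old new || { mv; daemon-reload; enable; restart; }` only swaps in docker.service.new and restarts Docker when the rendered unit differs from what is on disk (here diff fails because no unit existed yet, so it is installed and enabled, hence the "Created symlink" line). A rough local-filesystem equivalent in Go; a sketch only, not minikube's code, which runs the shell form over SSH:

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// updateUnitIfChanged rewrites the unit and bounces the service only when
// the rendered text differs from the current file contents.
func updateUnitIfChanged(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip the daemon-reload and restart entirely
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated example
	if err := updateUnitIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
		panic(err)
	}
}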
	I0920 10:54:36.671467    9036 start.go:293] postStartSetup for "stopped-upgrade-423000" (driver="qemu2")
	I0920 10:54:36.671474    9036 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 10:54:36.671543    9036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 10:54:36.671553    9036 sshutil.go:53] new ssh client: &{IP:localhost Port:51506 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0920 10:54:36.699766    9036 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 10:54:36.701111    9036 info.go:137] Remote host: Buildroot 2021.02.12
	I0920 10:54:36.701117    9036 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19678-6679/.minikube/addons for local assets ...
	I0920 10:54:36.701207    9036 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19678-6679/.minikube/files for local assets ...
	I0920 10:54:36.701345    9036 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19678-6679/.minikube/files/etc/ssl/certs/71912.pem -> 71912.pem in /etc/ssl/certs
	I0920 10:54:36.701485    9036 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 10:54:36.704091    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/files/etc/ssl/certs/71912.pem --> /etc/ssl/certs/71912.pem (1708 bytes)
	I0920 10:54:36.711188    9036 start.go:296] duration metric: took 39.716167ms for postStartSetup
	I0920 10:54:36.711204    9036 fix.go:56] duration metric: took 20.978870541s for fixHost
	I0920 10:54:36.711243    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:36.711346    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:36.711351    9036 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 10:54:36.760934    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726854877.180929837
	
	I0920 10:54:36.760941    9036 fix.go:216] guest clock: 1726854877.180929837
	I0920 10:54:36.760944    9036 fix.go:229] Guest: 2024-09-20 10:54:37.180929837 -0700 PDT Remote: 2024-09-20 10:54:36.711206 -0700 PDT m=+21.088847418 (delta=469.723837ms)
	I0920 10:54:36.760955    9036 fix.go:200] guest clock delta is within tolerance: 469.723837ms
	I0920 10:54:36.760957    9036 start.go:83] releasing machines lock for "stopped-upgrade-423000", held for 21.028633667s
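The clock check above reads `date +%s.%N` from the guest and compares it against host time; the 469.7ms delta is under tolerance, so the guest clock is left alone. A minimal sketch of that comparison (the 2s tolerance constant is an assumption for illustration, not minikube's configured value):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest output of `date +%s.%N`, taken verbatim from the log above.
	guestRaw := "1726854877.180929837"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}

	// Tolerance is illustrative; if exceeded, the guest clock would be reset
	// to host time instead of being left as-is.
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}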
	I0920 10:54:36.761029    9036 ssh_runner.go:195] Run: cat /version.json
	I0920 10:54:36.761034    9036 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 10:54:36.761051    9036 sshutil.go:53] new ssh client: &{IP:localhost Port:51506 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0920 10:54:36.761052    9036 sshutil.go:53] new ssh client: &{IP:localhost Port:51506 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	W0920 10:54:36.761597    9036 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51506: connect: connection refused
	I0920 10:54:36.761615    9036 retry.go:31] will retry after 356.261874ms: dial tcp [::1]:51506: connect: connection refused
	W0920 10:54:37.164759    9036 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0920 10:54:37.164941    9036 ssh_runner.go:195] Run: systemctl --version
	I0920 10:54:37.168613    9036 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 10:54:37.171997    9036 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 10:54:37.172054    9036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0920 10:54:37.177217    9036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0920 10:54:37.184232    9036 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
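The two find/sed invocations above rewrite any bridge or podman CNI config so its "subnet" (and podman's "gateway") match the pod CIDR 10.244.0.0/16, and delete entries whose addresses contain a colon (IPv6). The core substitution, expressed as a Go regexp pass over the conflist file found in the log; a sketch of the sed edits, not minikube's code:

package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/cni/net.d/87-podman-bridge.conflist" // from the log above
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Mirror of the sed expressions: force the pod CIDR and gateway.
	subnet := regexp.MustCompile(`"subnet": ".*"`)
	gateway := regexp.MustCompile(`"gateway": ".*"`)
	data = subnet.ReplaceAll(data, []byte(`"subnet": "10.244.0.0/16"`))
	data = gateway.ReplaceAll(data, []byte(`"gateway": "10.244.0.1"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}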
	I0920 10:54:37.184245    9036 start.go:495] detecting cgroup driver to use...
	I0920 10:54:37.184353    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:54:37.193850    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0920 10:54:37.197557    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 10:54:37.201019    9036 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 10:54:37.201057    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 10:54:37.204480    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:54:37.208034    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 10:54:37.211531    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:54:37.214611    9036 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 10:54:37.217401    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 10:54:37.220107    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 10:54:37.223391    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 10:54:37.226615    9036 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 10:54:37.229029    9036 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 10:54:37.231929    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:54:37.315576    9036 ssh_runner.go:195] Run: sudo systemctl restart containerd
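The run of sed edits above flips containerd to the cgroupfs driver (SystemdCgroup = false), pins the sandbox image to registry.k8s.io/pause:3.7, disables restrict_oom_score_adj, and points conf_dir at /etc/cni/net.d before reloading systemd and restarting containerd. As a single Go regexp pass over config.toml (a sketch covering a subset of the edits):

package main

import (
	"os"
	"regexp"
)

func main() {
	data, err := os.ReadFile("/etc/containerd/config.toml")
	if err != nil {
		panic(err)
	}
	// Each rule mirrors one of the logged sed commands.
	rules := []struct{ re, repl string }{
		{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},                       // cgroupfs driver
		{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.7"`}, // pause image
		{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}
	for _, r := range rules {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
	}
	if err := os.WriteFile("/etc/containerd/config.toml", data, 0o644); err != nil {
		panic(err)
	}
	// A `systemctl daemon-reload` and `systemctl restart containerd` follow, as in the log.
}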
	I0920 10:54:37.321894    9036 start.go:495] detecting cgroup driver to use...
	I0920 10:54:37.321967    9036 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 10:54:37.327421    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:54:37.332551    9036 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 10:54:37.342722    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:54:37.347296    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:54:37.351860    9036 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 10:54:37.411531    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:54:37.416513    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:54:37.421712    9036 ssh_runner.go:195] Run: which cri-dockerd
	I0920 10:54:37.422875    9036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 10:54:37.425311    9036 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0920 10:54:37.430057    9036 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 10:54:37.517375    9036 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 10:54:37.608675    9036 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 10:54:37.608739    9036 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 10:54:37.613943    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:54:37.679251    9036 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:54:38.821318    9036 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.142053375s)
	I0920 10:54:38.821389    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 10:54:38.826128    9036 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0920 10:54:38.832776    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:54:38.837639    9036 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 10:54:38.924825    9036 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 10:54:39.007721    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:54:39.083392    9036 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 10:54:39.089360    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:54:39.094005    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:54:39.162910    9036 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 10:54:39.201506    9036 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 10:54:39.201601    9036 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 10:54:39.204203    9036 start.go:563] Will wait 60s for crictl version
	I0920 10:54:39.204269    9036 ssh_runner.go:195] Run: which crictl
	I0920 10:54:39.205725    9036 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 10:54:39.219719    9036 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0920 10:54:39.219812    9036 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:54:39.235312    9036 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:54:39.253528    9036 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0920 10:54:39.253609    9036 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0920 10:54:39.254871    9036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
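The bash one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal mapping with grep -v, append the fresh one, and copy the result back via sudo. The same logic in Go, as a local sketch (the real command runs inside the guest over SSH):

package main

import (
	"os"
	"strings"
)

// upsertHostsEntry rewrites the hosts file so exactly one line maps name.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping, like grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "10.0.2.2", "host.minikube.internal"); err != nil {
		panic(err)
	}
}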
	I0920 10:54:39.259169    9036 kubeadm.go:883] updating cluster {Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51540 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0920 10:54:39.259218    9036 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:54:39.259275    9036 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:54:39.274953    9036 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:54:39.274963    9036 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:54:39.275018    9036 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:54:39.278375    9036 ssh_runner.go:195] Run: which lz4
	I0920 10:54:39.279871    9036 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 10:54:39.281377    9036 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 10:54:39.281392    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0920 10:54:40.264238    9036 docker.go:649] duration metric: took 984.419834ms to copy over tarball
	I0920 10:54:40.264308    9036 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
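Because the guest's preloaded images carry k8s.gcr.io tags rather than the expected registry.k8s.io ones, the preload tarball (~360 MB) is checked with stat, copied over when missing, and unpacked into /var with lz4. Sketched as Go around the same shell commands; illustrative only, since the real transfer goes through ssh_runner:

package main

import (
	"io"
	"os"
	"os/exec"
)

func main() {
	const remote = "/preloaded.tar.lz4"
	// Equivalent of the failed `stat -c "%s %y"` existence check above.
	if _, err := os.Stat(remote); os.IsNotExist(err) {
		src, err := os.Open("preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer src.Close()
		dst, err := os.Create(remote)
		if err != nil {
			panic(err)
		}
		if _, err := io.Copy(dst, src); err != nil {
			panic(err)
		}
		dst.Close()
	}
	// Same extraction command as the log: lz4-decompress into /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", remote)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}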
	I0920 10:54:39.254941    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:39.255027    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:39.266623    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:54:39.266706    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:39.280508    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:54:39.280566    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:39.295105    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:54:39.295197    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:39.307114    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:54:39.307201    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:39.319172    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:54:39.319268    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:39.331681    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:54:39.331770    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:39.343760    8893 logs.go:276] 0 containers: []
	W0920 10:54:39.343772    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:39.343845    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:39.355884    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:54:39.355903    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:54:39.355909    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:54:39.380681    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:54:39.380692    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:54:39.397131    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:54:39.397145    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:54:39.425862    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:54:39.425875    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:54:39.438621    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:54:39.438633    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:39.452076    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:39.452090    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:39.465023    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:39.465036    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:39.469711    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:39.469721    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:39.508296    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:54:39.508310    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:54:39.528027    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:54:39.528039    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:54:39.541072    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:54:39.541084    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:54:39.554339    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:54:39.554351    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:54:39.594071    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:54:39.594087    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:54:39.606787    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:39.606799    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:39.622988    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:39.622998    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:39.664187    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:54:39.664204    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:39.680070    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:39.680084    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:42.207373    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
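The interleaved pid-8893 entries above come from a second test process diagnosing an unhealthy apiserver: for each control-plane component it lists containers (running or exited) with `docker ps -a --filter=name=k8s_<name>`, then tails 400 lines from every instance found. The gathering loop reduces to roughly this sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
		// Find all containers, including exited ones, for this component.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Printf("==> logs for %s [%s]\n", component, id)
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}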
	I0920 10:54:41.418720    9036 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.154403166s)
	I0920 10:54:41.418734    9036 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 10:54:41.434217    9036 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:54:41.437059    9036 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0920 10:54:41.441940    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:54:41.526287    9036 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:54:43.221501    9036 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.695207542s)
	I0920 10:54:43.221604    9036 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:54:43.234820    9036 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:54:43.234829    9036 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:54:43.234834    9036 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 10:54:43.241194    9036 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:54:43.242147    9036 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:54:43.244230    9036 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:54:43.244328    9036 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:54:43.245656    9036 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:54:43.245773    9036 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:54:43.247081    9036 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:54:43.247582    9036 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:54:43.248593    9036 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:54:43.248673    9036 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:54:43.249790    9036 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:54:43.250083    9036 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:54:43.251256    9036 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 10:54:43.251539    9036 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:54:43.252547    9036 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:54:43.253037    9036 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 10:54:43.699999    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:54:43.701753    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0920 10:54:43.703616    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:54:43.706504    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:54:43.716148    9036 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0920 10:54:43.716171    9036 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:54:43.716247    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:54:43.736320    9036 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0920 10:54:43.736343    9036 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:54:43.736374    9036 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0920 10:54:43.736386    9036 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:54:43.736342    9036 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0920 10:54:43.736405    9036 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:54:43.736409    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0920 10:54:43.736422    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:54:43.736444    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0920 10:54:43.740789    9036 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0920 10:54:43.740945    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:54:43.741709    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:54:43.747931    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0920 10:54:43.768935    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0920 10:54:43.772652    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0920 10:54:43.772652    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0920 10:54:43.772709    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0920 10:54:43.772727    9036 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0920 10:54:43.772750    9036 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:54:43.772779    9036 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0920 10:54:43.772792    9036 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:54:43.772798    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:54:43.772795    9036 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:54:43.772826    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:54:43.784073    9036 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0920 10:54:43.784096    9036 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0920 10:54:43.784162    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0920 10:54:43.792053    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0920 10:54:43.792063    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 10:54:43.792086    9036 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0920 10:54:43.792103    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0920 10:54:43.792182    9036 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:54:43.803293    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0920 10:54:43.803424    9036 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0920 10:54:43.803903    9036 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0920 10:54:43.803914    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0920 10:54:43.813169    9036 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0920 10:54:43.813198    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0920 10:54:43.848916    9036 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0920 10:54:43.848929    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0920 10:54:43.933411    9036 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0920 10:54:43.933472    9036 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:54:43.933480    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0920 10:54:44.058319    9036 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0920 10:54:44.104736    9036 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0920 10:54:44.104856    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:54:44.118439    9036 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:54:44.118453    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0920 10:54:44.127411    9036 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0920 10:54:44.127446    9036 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:54:44.127518    9036 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:54:44.258722    9036 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0920 10:54:44.258746    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 10:54:44.258877    9036 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:54:44.260424    9036 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0920 10:54:44.260438    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0920 10:54:44.289488    9036 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:54:44.289509    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0920 10:54:44.518061    9036 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 10:54:44.518098    9036 cache_images.go:92] duration metric: took 1.28326375s to LoadCachedImages
	W0920 10:54:44.518133    9036 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
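Each cached image that is missing or present at the wrong hash (the coredns and storage-provisioner tarballs were rebuilt for arm64 after the arch-mismatch warnings) is removed from the runtime, copied to /var/lib/minikube/images, and streamed in with `sudo cat <file> | docker load`; the final warning appears because no cached tarball exists for kube-apiserver_v1.24.1 on this host. The load step itself, as a sketch:

package main

import (
	"os"
	"os/exec"
)

// loadImage streams a saved image tarball into the Docker daemon,
// mirroring `sudo cat <file> | docker load` from the log.
func loadImage(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, img := range []string{
		"/var/lib/minikube/images/pause_3.7",
		"/var/lib/minikube/images/coredns_v1.8.6",
		"/var/lib/minikube/images/etcd_3.5.3-0",
		"/var/lib/minikube/images/storage-provisioner_v5",
	} {
		if err := loadImage(img); err != nil {
			panic(err)
		}
	}
}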
	I0920 10:54:44.518138    9036 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0920 10:54:44.518193    9036 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-423000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 10:54:44.518269    9036 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 10:54:44.531867    9036 cni.go:84] Creating CNI manager for ""
	I0920 10:54:44.531886    9036 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:54:44.531891    9036 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 10:54:44.531899    9036 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-423000 NodeName:stopped-upgrade-423000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 10:54:44.531973    9036 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-423000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 10:54:44.532035    9036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0920 10:54:44.535407    9036 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 10:54:44.535434    9036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 10:54:44.538345    9036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0920 10:54:44.543372    9036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 10:54:44.548342    9036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0920 10:54:44.553745    9036 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0920 10:54:44.554927    9036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 10:54:44.558645    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:54:44.634949    9036 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:54:44.641514    9036 certs.go:68] Setting up /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000 for IP: 10.0.2.15
	I0920 10:54:44.641528    9036 certs.go:194] generating shared ca certs ...
	I0920 10:54:44.641538    9036 certs.go:226] acquiring lock for ca certs: {Name:mkeda31d83c21edf6ebc3767ef11bc03f6f18a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:54:44.641714    9036 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.key
	I0920 10:54:44.641766    9036 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/proxy-client-ca.key
	I0920 10:54:44.641772    9036 certs.go:256] generating profile certs ...
	I0920 10:54:44.641849    9036 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/client.key
	I0920 10:54:44.641867    9036 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81
	I0920 10:54:44.641877    9036 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0920 10:54:44.813213    9036 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81 ...
	I0920 10:54:44.813227    9036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81: {Name:mk907fabc7f6e8ab3ba7b6f06cfcdc116f1a9698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:54:44.813574    9036 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81 ...
	I0920 10:54:44.813578    9036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81: {Name:mkc3da0abb71653cc5ab3b57f0e66ae346ec6554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:54:44.813713    9036 certs.go:381] copying /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.crt
	I0920 10:54:44.813860    9036 certs.go:385] copying /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.key
	I0920 10:54:44.814023    9036 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/proxy-client.key
	I0920 10:54:44.814160    9036 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/7191.pem (1338 bytes)
	W0920 10:54:44.814189    9036 certs.go:480] ignoring /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/7191_empty.pem, impossibly tiny 0 bytes
	I0920 10:54:44.814194    9036 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 10:54:44.814224    9036 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem (1078 bytes)
	I0920 10:54:44.814243    9036 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem (1123 bytes)
	I0920 10:54:44.814262    9036 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/key.pem (1675 bytes)
	I0920 10:54:44.814302    9036 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/files/etc/ssl/certs/71912.pem (1708 bytes)
	I0920 10:54:44.814632    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 10:54:44.821860    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 10:54:44.828317    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 10:54:44.835534    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 10:54:44.842831    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 10:54:44.849778    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 10:54:44.856285    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 10:54:44.863504    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 10:54:44.871014    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/7191.pem --> /usr/share/ca-certificates/7191.pem (1338 bytes)
	I0920 10:54:44.878094    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/files/etc/ssl/certs/71912.pem --> /usr/share/ca-certificates/71912.pem (1708 bytes)
	I0920 10:54:44.884684    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 10:54:44.891792    9036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 10:54:44.896942    9036 ssh_runner.go:195] Run: openssl version
	I0920 10:54:44.898790    9036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7191.pem && ln -fs /usr/share/ca-certificates/7191.pem /etc/ssl/certs/7191.pem"
	I0920 10:54:44.901719    9036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7191.pem
	I0920 10:54:44.903148    9036 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:39 /usr/share/ca-certificates/7191.pem
	I0920 10:54:44.903171    9036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7191.pem
	I0920 10:54:44.905435    9036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7191.pem /etc/ssl/certs/51391683.0"
	I0920 10:54:44.908456    9036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71912.pem && ln -fs /usr/share/ca-certificates/71912.pem /etc/ssl/certs/71912.pem"
	I0920 10:54:44.911800    9036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71912.pem
	I0920 10:54:44.913317    9036 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:39 /usr/share/ca-certificates/71912.pem
	I0920 10:54:44.913335    9036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71912.pem
	I0920 10:54:44.914900    9036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71912.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 10:54:44.917583    9036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 10:54:44.920530    9036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:54:44.921857    9036 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:50 /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:54:44.921882    9036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:54:44.923567    9036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 10:54:44.926782    9036 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 10:54:44.928283    9036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 10:54:44.930148    9036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 10:54:44.931905    9036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 10:54:44.933720    9036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 10:54:44.935477    9036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 10:54:44.937278    9036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
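
Each `-checkend 86400` probe exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, so a non-zero status is what would flag a control-plane certificate for regeneration. A sketch against one of the paths above:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h; needs regenerating"
    fi
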
	I0920 10:54:44.938981    9036 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51540 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:54:44.939062    9036 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:54:44.948942    9036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 10:54:44.952379    9036 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 10:54:44.952391    9036 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 10:54:44.952420    9036 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 10:54:44.956072    9036 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:54:44.956391    9036 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-423000" does not appear in /Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:54:44.956486    9036 kubeconfig.go:62] /Users/jenkins/minikube-integration/19678-6679/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-423000" cluster setting kubeconfig missing "stopped-upgrade-423000" context setting]
	I0920 10:54:44.956676    9036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/kubeconfig: {Name:mkec5cafcbf4b1660482b6f210de54829a52092a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:54:44.957074    9036 kapi.go:59] client config for stopped-upgrade-423000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/client.key", CAFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f5a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:54:44.957412    9036 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 10:54:44.960272    9036 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-423000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
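
The drift detection is a plain `diff -u` of the deployed kubeadm.yaml against the freshly rendered kubeadm.yaml.new: diff exits 1 when the files differ, minikube records the hunks above, and the later `sudo cp` adopts the new file. Roughly:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        # files differ: take the newly rendered config before re-running kubeadm
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi
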
	I0920 10:54:44.960277    9036 kubeadm.go:1160] stopping kube-system containers ...
	I0920 10:54:44.960322    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:54:44.970871    9036 docker.go:483] Stopping containers: [679ec37c5db9 bbc78c4773e8 aceabc06111c 1619d098154d 3f14f3112347 1fca3ed6d070 61a375dec486 650308392c15]
	I0920 10:54:44.970954    9036 ssh_runner.go:195] Run: docker stop 679ec37c5db9 bbc78c4773e8 aceabc06111c 1619d098154d 3f14f3112347 1fca3ed6d070 61a375dec486 650308392c15
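
The IDs being stopped come from filtering on the kubelet's `k8s_<container>_<pod>_<namespace>_...` container-naming convention, so one filtered `docker ps` plus one `docker stop` clears everything whose pod lives in kube-system. A condensed equivalent:

    ids=$(docker ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}}')
    [ -n "$ids" ] && docker stop $ids
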
	I0920 10:54:44.981684    9036 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 10:54:44.987353    9036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:54:44.990240    9036 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:54:44.990246    9036 kubeadm.go:157] found existing configuration files:
	
	I0920 10:54:44.990272    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/admin.conf
	I0920 10:54:44.992754    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:54:44.992778    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:54:44.995992    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/kubelet.conf
	I0920 10:54:44.999002    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:54:44.999033    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:54:45.001555    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/controller-manager.conf
	I0920 10:54:45.004100    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:54:45.004127    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:54:45.007141    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/scheduler.conf
	I0920 10:54:45.009768    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:54:45.009791    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
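
None of the four kubeconfigs exists yet, so every grep for the expected endpoint fails with status 2 and the file is removed regardless; the identical sequence runs again later for the other profile on port 51293. Condensed, the cleanup is:

    ep="https://control-plane.minikube.internal:51540"
    for f in admin kubelet controller-manager scheduler; do
        # keep the file only if it already points at the expected endpoint
        sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done
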
	I0920 10:54:45.012203    9036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:54:45.015289    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:54:45.037030    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:54:47.210041    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
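
Each healthz probe is a plain HTTPS GET against the apiserver with a short client-side timeout; "context deadline exceeded ... while awaiting headers" means no response headers ever arrived. A manual equivalent (assuming certificate verification can be skipped for a liveness check) would be something like:

    # --max-time bounds the wait the way the Go client's timeout does
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
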
	I0920 10:54:47.210148    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:47.222234    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:54:47.222319    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:47.238167    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:54:47.238251    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:47.252657    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:54:47.252738    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:47.292204    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:54:47.292294    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:47.307354    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:54:47.307444    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:47.319361    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:54:47.319443    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:47.331593    8893 logs.go:276] 0 containers: []
	W0920 10:54:47.331607    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:47.331684    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:47.342544    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:54:47.342562    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:54:47.342569    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:54:47.354236    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:47.354248    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:47.370842    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:54:47.370854    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:54:47.383717    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:54:47.383728    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
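
The container-status gather relies on a small shell fallback: `which crictl || echo crictl` substitutes the literal word crictl when the binary is absent, so the first command fails cleanly and the `|| sudo docker ps -a` branch runs instead. Expanded:

    # prefer crictl when installed, otherwise fall back to the docker CLI
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
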
	I0920 10:54:47.396532    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:54:47.396546    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:54:47.415004    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:54:47.415016    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:54:47.427004    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:47.427015    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:47.438572    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:54:47.438583    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:54:47.452692    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:54:47.452707    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:54:47.464751    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:54:47.464764    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:54:47.476659    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:47.476670    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:47.501237    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:47.501251    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:47.538127    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:47.538138    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:47.542432    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:54:47.542438    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:54:47.580378    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:54:47.580391    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:47.594818    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:54:47.594829    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:54:47.612169    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:47.612179    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:45.769596    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:54:45.896634    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:54:45.919693    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
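
restartPrimaryControlPlane drives kubeadm phase by phase rather than via a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, and etcd are each regenerated against the same rendered config. The five invocations above amount to:

    cfg=/var/tmp/minikube/kubeadm.yaml
    bin=/var/lib/minikube/binaries/v1.24.1
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # $phase is deliberately unquoted so it splits into subcommand + argument
        sudo env PATH="$bin:$PATH" kubeadm init phase $phase --config "$cfg"
    done
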
	I0920 10:54:45.940445    9036 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:54:45.940530    9036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:54:46.442387    9036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:54:46.942568    9036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:54:46.947130    9036 api_server.go:72] duration metric: took 1.006691833s to wait for apiserver process to appear ...
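
The one-second wait above comes from polling pgrep roughly twice per second: -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest matching PID. A loop of about this shape:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
        sleep 0.5
    done
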
	I0920 10:54:46.947140    9036 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:54:46.947150    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:50.152295    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:51.948658    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:51.948734    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:55.154308    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:55.154533    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:55.172856    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:54:55.172967    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:55.186751    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:54:55.186831    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:55.198249    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:54:55.198336    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:55.208961    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:54:55.209041    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:55.219518    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:54:55.219604    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:55.230468    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:54:55.230550    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:55.240343    8893 logs.go:276] 0 containers: []
	W0920 10:54:55.240355    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:55.240427    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:55.251235    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:54:55.251253    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:55.251260    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:55.288255    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:54:55.288267    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:54:55.302669    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:54:55.302683    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:54:55.313411    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:54:55.313422    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:54:55.331092    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:54:55.331101    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:54:55.345715    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:54:55.345730    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:54:55.360925    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:54:55.360937    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:54:55.377034    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:55.377043    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:55.401432    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:54:55.401447    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:55.413317    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:55.413330    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:55.417463    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:54:55.417470    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:54:55.454973    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:54:55.454983    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:54:55.472282    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:54:55.472297    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:54:55.490510    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:54:55.490520    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:54:55.502771    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:55.502787    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:55.540280    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:54:55.540287    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:54:55.552124    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:54:55.552138    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:54:56.949328    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:56.949390    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:58.065574    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:01.949795    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:01.949838    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:03.067758    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:03.067888    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:03.079384    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:55:03.079476    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:03.091303    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:55:03.091387    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:03.102637    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:55:03.102717    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:03.115691    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:55:03.115782    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:03.127394    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:55:03.127482    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:03.137855    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:55:03.137937    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:03.147598    8893 logs.go:276] 0 containers: []
	W0920 10:55:03.147612    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:03.147692    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:03.158175    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:55:03.158195    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:03.158200    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:03.197245    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:55:03.197260    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:55:03.212607    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:55:03.212619    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:55:03.224557    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:55:03.224568    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:03.236099    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:55:03.236113    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:55:03.250932    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:55:03.250943    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:55:03.268701    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:55:03.268717    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:55:03.284170    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:55:03.284188    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:55:03.297291    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:55:03.297304    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:55:03.319860    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:55:03.319873    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:55:03.331875    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:03.331886    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:03.354697    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:03.354705    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:03.358771    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:03.358777    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:03.403490    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:55:03.403505    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:55:03.415632    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:55:03.415644    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:55:03.427700    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:55:03.427711    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:55:03.464958    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:55:03.464972    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:55:05.979537    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:06.950526    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:06.950586    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:10.981912    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:10.982151    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:11.004264    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:55:11.004365    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:11.018668    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:55:11.018763    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:11.037106    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:55:11.037180    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:11.054587    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:55:11.054672    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:11.065019    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:55:11.065096    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:11.075570    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:55:11.075641    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:11.085996    8893 logs.go:276] 0 containers: []
	W0920 10:55:11.086007    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:11.086083    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:11.096880    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:55:11.096900    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:11.096907    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:11.101540    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:55:11.101549    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:55:11.115293    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:55:11.115303    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:55:11.152122    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:55:11.152132    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:55:11.163485    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:55:11.163497    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:55:11.176413    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:11.176425    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:11.199336    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:55:11.199343    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:55:11.213290    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:55:11.213301    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:55:11.225721    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:11.225736    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:11.262532    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:55:11.262551    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:55:11.277574    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:55:11.277588    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:55:11.290566    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:55:11.290578    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:11.302929    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:11.302945    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:11.337347    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:55:11.337360    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:55:11.355652    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:55:11.355667    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:55:11.377677    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:55:11.377693    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:55:11.389838    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:55:11.389850    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:55:11.951274    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:11.951318    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:13.912465    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:16.952378    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:16.952509    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:18.913696    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:18.913894    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:18.932242    8893 logs.go:276] 2 containers: [5ba4f91b1e3f 7580bd5f450d]
	I0920 10:55:18.932347    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:18.945609    8893 logs.go:276] 2 containers: [fe9a198ead17 dc2862ede330]
	I0920 10:55:18.945703    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:18.957348    8893 logs.go:276] 1 containers: [30c75757aaeb]
	I0920 10:55:18.957421    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:18.969324    8893 logs.go:276] 2 containers: [401533cfff5d 26792109aa9e]
	I0920 10:55:18.969401    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:18.980364    8893 logs.go:276] 1 containers: [698d7a7a4fab]
	I0920 10:55:18.980448    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:18.991887    8893 logs.go:276] 2 containers: [36a9a489792d 30d9b98333f5]
	I0920 10:55:18.991969    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:19.002923    8893 logs.go:276] 0 containers: []
	W0920 10:55:19.002934    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:19.003005    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:19.013757    8893 logs.go:276] 2 containers: [9335c378f8ce 1b3b77f5a6ce]
	I0920 10:55:19.013774    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:19.013779    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:19.050073    8893 logs.go:123] Gathering logs for etcd [fe9a198ead17] ...
	I0920 10:55:19.050081    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe9a198ead17"
	I0920 10:55:19.063990    8893 logs.go:123] Gathering logs for etcd [dc2862ede330] ...
	I0920 10:55:19.064003    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc2862ede330"
	I0920 10:55:19.081628    8893 logs.go:123] Gathering logs for kube-controller-manager [36a9a489792d] ...
	I0920 10:55:19.081638    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a9a489792d"
	I0920 10:55:19.099410    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:19.099425    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:19.122974    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:19.122982    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:19.127203    8893 logs.go:123] Gathering logs for kube-apiserver [7580bd5f450d] ...
	I0920 10:55:19.127210    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7580bd5f450d"
	I0920 10:55:19.164924    8893 logs.go:123] Gathering logs for coredns [30c75757aaeb] ...
	I0920 10:55:19.164935    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30c75757aaeb"
	I0920 10:55:19.176565    8893 logs.go:123] Gathering logs for storage-provisioner [9335c378f8ce] ...
	I0920 10:55:19.176578    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335c378f8ce"
	I0920 10:55:19.187984    8893 logs.go:123] Gathering logs for kube-proxy [698d7a7a4fab] ...
	I0920 10:55:19.187995    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 698d7a7a4fab"
	I0920 10:55:19.199739    8893 logs.go:123] Gathering logs for storage-provisioner [1b3b77f5a6ce] ...
	I0920 10:55:19.199750    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b3b77f5a6ce"
	I0920 10:55:19.219850    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:19.219860    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:19.257180    8893 logs.go:123] Gathering logs for kube-apiserver [5ba4f91b1e3f] ...
	I0920 10:55:19.257195    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ba4f91b1e3f"
	I0920 10:55:19.275476    8893 logs.go:123] Gathering logs for kube-scheduler [401533cfff5d] ...
	I0920 10:55:19.275491    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401533cfff5d"
	I0920 10:55:19.287484    8893 logs.go:123] Gathering logs for kube-scheduler [26792109aa9e] ...
	I0920 10:55:19.287498    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26792109aa9e"
	I0920 10:55:19.303168    8893 logs.go:123] Gathering logs for kube-controller-manager [30d9b98333f5] ...
	I0920 10:55:19.303182    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d9b98333f5"
	I0920 10:55:19.315967    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:55:19.315978    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:21.831050    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:21.954063    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:21.954100    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:26.833520    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:26.833691    8893 kubeadm.go:597] duration metric: took 4m4.589419334s to restartPrimaryControlPlane
	W0920 10:55:26.833829    8893 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 10:55:26.833880    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0920 10:55:27.858521    8893 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.024631208s)
	I0920 10:55:27.858586    8893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:55:27.863615    8893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:55:27.866497    8893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:55:27.869055    8893 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:55:27.869060    8893 kubeadm.go:157] found existing configuration files:
	
	I0920 10:55:27.869084    8893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/admin.conf
	I0920 10:55:27.871769    8893 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:55:27.871798    8893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:55:27.874492    8893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/kubelet.conf
	I0920 10:55:27.877177    8893 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:55:27.877198    8893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:55:27.880753    8893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/controller-manager.conf
	I0920 10:55:27.883737    8893 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:55:27.883760    8893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:55:27.886281    8893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/scheduler.conf
	I0920 10:55:27.888984    8893 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:55:27.889012    8893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 10:55:27.892143    8893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 10:55:27.909338    8893 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0920 10:55:27.909374    8893 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 10:55:27.958532    8893 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 10:55:27.958589    8893 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 10:55:27.958646    8893 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 10:55:28.008529    8893 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 10:55:28.011640    8893 out.go:235]   - Generating certificates and keys ...
	I0920 10:55:28.011683    8893 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 10:55:28.011719    8893 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 10:55:28.011762    8893 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 10:55:28.011792    8893 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 10:55:28.011835    8893 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 10:55:28.011864    8893 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 10:55:28.011897    8893 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 10:55:28.011928    8893 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 10:55:28.011965    8893 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 10:55:28.012003    8893 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 10:55:28.012022    8893 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 10:55:28.012052    8893 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 10:55:28.129163    8893 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 10:55:28.180658    8893 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 10:55:28.275368    8893 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 10:55:28.413015    8893 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 10:55:28.441925    8893 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 10:55:28.442227    8893 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 10:55:28.442287    8893 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 10:55:28.536782    8893 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 10:55:26.955705    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:26.955747    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:28.544905    8893 out.go:235]   - Booting up control plane ...
	I0920 10:55:28.544955    8893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 10:55:28.544990    8893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 10:55:28.545020    8893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 10:55:28.545054    8893 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 10:55:28.545159    8893 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 10:55:33.042695    8893 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501714 seconds
	I0920 10:55:33.042802    8893 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 10:55:33.047375    8893 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 10:55:33.558667    8893 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 10:55:33.558829    8893 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-568000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 10:55:34.061934    8893 kubeadm.go:310] [bootstrap-token] Using token: m87ix1.kgyx5cadz2riz65a
	I0920 10:55:34.066069    8893 out.go:235]   - Configuring RBAC rules ...
	I0920 10:55:34.066130    8893 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 10:55:34.068496    8893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 10:55:34.073671    8893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 10:55:34.074454    8893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 10:55:34.075232    8893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 10:55:34.076115    8893 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 10:55:34.079984    8893 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 10:55:34.253905    8893 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 10:55:34.470388    8893 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 10:55:34.471191    8893 kubeadm.go:310] 
	I0920 10:55:34.471224    8893 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 10:55:34.471227    8893 kubeadm.go:310] 
	I0920 10:55:34.471280    8893 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 10:55:34.471284    8893 kubeadm.go:310] 
	I0920 10:55:34.471296    8893 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 10:55:34.471328    8893 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 10:55:34.471357    8893 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 10:55:34.471362    8893 kubeadm.go:310] 
	I0920 10:55:34.471398    8893 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 10:55:34.471403    8893 kubeadm.go:310] 
	I0920 10:55:34.471425    8893 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 10:55:34.471429    8893 kubeadm.go:310] 
	I0920 10:55:34.471514    8893 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 10:55:34.471618    8893 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 10:55:34.471747    8893 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 10:55:34.471754    8893 kubeadm.go:310] 
	I0920 10:55:34.471828    8893 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 10:55:34.471865    8893 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 10:55:34.471868    8893 kubeadm.go:310] 
	I0920 10:55:34.471908    8893 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m87ix1.kgyx5cadz2riz65a \
	I0920 10:55:34.471989    8893 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:129743b48d26fde689f5c5778f41c5956c7837483c32951bacd09a1e6883dcfa \
	I0920 10:55:34.472006    8893 kubeadm.go:310] 	--control-plane 
	I0920 10:55:34.472011    8893 kubeadm.go:310] 
	I0920 10:55:34.472064    8893 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 10:55:34.472071    8893 kubeadm.go:310] 
	I0920 10:55:34.472120    8893 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m87ix1.kgyx5cadz2riz65a \
	I0920 10:55:34.472280    8893 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:129743b48d26fde689f5c5778f41c5956c7837483c32951bacd09a1e6883dcfa 
	I0920 10:55:34.472351    8893 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
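
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key. Following the kubeadm documentation, it can be recomputed from the CA certificate (which minikube keeps at /var/lib/minikube/certs/ca.crt) with:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'
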
	I0920 10:55:34.472360    8893 cni.go:84] Creating CNI manager for ""
	I0920 10:55:34.472369    8893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:55:34.476079    8893 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:55:34.486131    8893 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:55:34.489828    8893 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
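
The 496-byte 1-k8s.conflist written here configures the default bridge CNI plugin; its exact contents are not in the log. As an illustration only (every field value below is an assumption, not the bytes minikube wrote), a minimal bridge conflist of this kind has the shape:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
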
	I0920 10:55:34.495414    8893 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:55:34.495481    8893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-568000 minikube.k8s.io/updated_at=2024_09_20T10_55_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=running-upgrade-568000 minikube.k8s.io/primary=true
	I0920 10:55:34.495483    8893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:55:34.544222    8893 kubeadm.go:1113] duration metric: took 48.801042ms to wait for elevateKubeSystemPrivileges
	I0920 10:55:34.544264    8893 ops.go:34] apiserver oom_adj: -16
	I0920 10:55:34.545652    8893 kubeadm.go:394] duration metric: took 4m12.315917208s to StartCluster
	I0920 10:55:34.545667    8893 settings.go:142] acquiring lock: {Name:mk5f352888690de611711a90a16fd3b08e6afbf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:55:34.545828    8893 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:55:34.546214    8893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/kubeconfig: {Name:mkec5cafcbf4b1660482b6f210de54829a52092a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:55:34.546431    8893 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:55:34.546440    8893 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 10:55:34.546469    8893 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-568000"
	I0920 10:55:34.546478    8893 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-568000"
	W0920 10:55:34.546481    8893 addons.go:243] addon storage-provisioner should already be in state true
	I0920 10:55:34.546502    8893 host.go:66] Checking if "running-upgrade-568000" exists ...
	I0920 10:55:34.546504    8893 config.go:182] Loaded profile config "running-upgrade-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:55:34.546509    8893 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-568000"
	I0920 10:55:34.546520    8893 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-568000"
	I0920 10:55:34.547353    8893 kapi.go:59] client config for running-upgrade-568000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/running-upgrade-568000/client.key", CAFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101eae030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:55:34.547484    8893 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-568000"
	W0920 10:55:34.547489    8893 addons.go:243] addon default-storageclass should already be in state true
	I0920 10:55:34.547496    8893 host.go:66] Checking if "running-upgrade-568000" exists ...
	I0920 10:55:34.551140    8893 out.go:177] * Verifying Kubernetes components...
	I0920 10:55:34.551449    8893 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:55:34.555460    8893 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:55:34.555483    8893 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0920 10:55:34.559027    8893 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:55:31.957691    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:31.957726    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:34.563209    8893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:55:34.567211    8893 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:55:34.567219    8893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:55:34.567227    8893 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0920 10:55:34.635441    8893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:55:34.640503    8893 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:55:34.640552    8893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:55:34.644921    8893 api_server.go:72] duration metric: took 98.4795ms to wait for apiserver process to appear ...
	I0920 10:55:34.644929    8893 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:55:34.644936    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:34.661487    8893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:55:34.701014    8893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:55:35.018489    8893 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 10:55:35.018499    8893 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 10:55:36.959969    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:36.960011    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:39.647030    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:39.647076    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:41.962320    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:41.962359    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:44.647427    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:44.647474    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:46.963633    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:46.963833    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:46.976813    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:55:46.976914    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:46.988423    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:55:46.988511    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:46.999483    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:55:46.999576    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:47.015155    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:55:47.015226    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:47.026271    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:55:47.026346    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:47.038094    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:55:47.038172    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:47.048526    9036 logs.go:276] 0 containers: []
	W0920 10:55:47.048544    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:47.048605    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:47.059675    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:55:47.059693    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:55:47.059698    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:55:47.071198    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:55:47.071208    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:55:47.089983    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:55:47.090000    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:47.103053    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:47.103065    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:47.142321    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:47.142339    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:47.251450    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:55:47.251467    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:55:47.269253    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:55:47.269265    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:55:47.284401    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:55:47.284414    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:55:47.297843    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:47.297856    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:47.302150    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:55:47.302156    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:55:47.312979    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:47.312989    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:47.339493    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:55:47.339507    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:55:47.351404    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:55:47.351414    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:55:47.392904    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:55:47.392920    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:55:47.406984    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:55:47.407000    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:55:47.423560    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:55:47.423571    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:55:47.436986    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:55:47.436998    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:55:49.951833    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:49.647813    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:49.647844    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:54.953164    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:54.953375    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:54.973792    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:55:54.973899    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:54.988383    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:55:54.988476    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:55.000261    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:55:55.000331    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:55.014933    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:55:55.015029    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:55.025661    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:55:55.025739    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:55.036010    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:55:55.036101    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:55.046309    9036 logs.go:276] 0 containers: []
	W0920 10:55:55.046320    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:55.046384    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:55.056765    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:55:55.056784    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:55.056790    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:55.097249    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:55:55.097259    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:55:55.135456    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:55:55.135468    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:55:55.146858    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:55:55.146870    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:55:55.161944    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:55:55.161958    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:55:55.173369    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:55:55.173380    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:55:55.187186    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:55:55.187195    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:55:55.201765    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:55:55.201779    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:55:55.215640    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:55:55.215653    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:55:55.233298    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:55:55.233309    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:55:55.248574    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:55:55.248584    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:55.260577    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:55.260588    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:55.297439    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:55.297448    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:55.301524    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:55:55.301530    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:55:55.315374    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:55:55.315382    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:55:55.328830    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:55:55.328845    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:55:55.349654    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:55.349666    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:54.648252    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:54.648277    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:57.875649    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:59.648850    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:59.648872    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:04.649586    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:04.649623    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0920 10:56:05.020754    8893 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0920 10:56:05.025067    8893 out.go:177] * Enabled addons: storage-provisioner
	I0920 10:56:02.877867    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:02.878114    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:02.901941    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:02.902045    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:02.915992    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:02.916086    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:02.926940    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:02.927022    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:02.937091    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:02.937180    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:02.947517    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:02.947589    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:02.958095    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:02.958176    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:02.968401    9036 logs.go:276] 0 containers: []
	W0920 10:56:02.968414    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:02.968489    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:02.978913    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:02.978929    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:02.978936    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:03.017670    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:03.017683    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:03.034048    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:03.034059    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:03.049488    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:03.049500    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:03.064665    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:03.064676    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:03.077879    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:03.077890    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:03.093851    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:03.093865    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:03.132082    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:03.132090    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:03.150013    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:03.150023    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:03.161318    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:03.161329    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:03.186472    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:03.186482    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:03.202294    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:03.202304    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:03.214145    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:03.214156    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:03.218341    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:03.218350    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:03.253810    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:03.253822    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:03.273351    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:03.273362    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:03.291861    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:03.291870    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:05.036004    8893 addons.go:510] duration metric: took 30.489724042s for enable addons: enabled=[storage-provisioner]
	I0920 10:56:05.805840    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:09.650572    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:09.650619    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:10.808487    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:10.808713    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:10.825324    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:10.825424    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:10.838040    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:10.838124    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:10.850222    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:10.850306    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:10.860762    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:10.860855    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:10.872058    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:10.872133    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:10.882972    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:10.883055    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:10.893479    9036 logs.go:276] 0 containers: []
	W0920 10:56:10.893493    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:10.893565    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:10.903686    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:10.903708    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:10.903715    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:10.917788    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:10.917799    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:10.931394    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:10.931409    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:10.945902    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:10.945911    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:10.957135    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:10.957146    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:10.995959    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:10.995975    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:11.006868    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:11.006878    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:11.011175    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:11.011185    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:11.049080    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:11.049091    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:11.061368    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:11.061379    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:11.072742    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:11.072754    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:11.097871    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:11.097887    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:11.111746    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:11.111755    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:11.150117    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:11.150146    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:11.169025    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:11.169039    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:11.180603    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:11.180612    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:11.197814    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:11.197824    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:13.713995    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:14.652001    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:14.652061    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:18.716446    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:18.716714    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:18.737630    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:18.737754    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:18.751852    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:18.751943    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:18.764019    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:18.764100    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:18.774524    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:18.774612    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:18.793638    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:18.793724    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:18.804309    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:18.804388    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:18.814728    9036 logs.go:276] 0 containers: []
	W0920 10:56:18.814740    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:18.814811    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:18.824999    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:18.825021    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:18.825026    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:18.861421    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:18.861435    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:18.872577    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:18.872587    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:18.889801    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:18.889813    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:18.912820    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:18.912828    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:18.949030    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:18.949039    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:18.962913    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:18.962927    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:18.974208    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:18.974216    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:18.991571    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:18.991581    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:18.995777    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:18.995784    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:19.036980    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:19.037006    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:19.048411    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:19.048422    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:19.059777    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:19.059791    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:19.072135    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:19.072148    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:19.087891    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:19.087900    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:19.104478    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:19.104489    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:19.119117    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:19.119131    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:19.653583    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:19.653608    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:21.634605    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:24.653862    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:24.653912    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:26.636855    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:26.637036    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:26.654329    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:26.654429    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:26.670636    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:26.670719    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:26.691096    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:26.691176    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:26.701293    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:26.701374    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:26.711872    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:26.711951    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:26.722879    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:26.722957    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:26.738466    9036 logs.go:276] 0 containers: []
	W0920 10:56:26.738478    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:26.738543    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:26.749760    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:26.749780    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:26.749788    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:26.774534    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:26.774541    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:26.786160    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:26.786172    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:26.797617    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:26.797627    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:26.808470    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:26.808481    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:26.820090    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:26.820099    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:26.857600    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:26.857611    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:26.872840    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:26.872850    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:26.907533    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:26.907545    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:26.922063    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:26.922078    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:26.940573    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:26.940584    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:26.955832    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:26.955843    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:26.973707    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:26.973717    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:27.012761    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:27.012769    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:27.017039    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:27.017045    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:27.030390    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:27.030400    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:27.044480    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:27.044489    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:29.557961    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:29.655920    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:29.655959    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:34.560286    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:34.560492    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:34.573646    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:34.573744    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:34.584483    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:34.584567    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:34.595064    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:34.595150    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:34.605520    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:34.605603    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:34.615802    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:34.615891    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:34.627774    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:34.627865    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:34.640062    9036 logs.go:276] 0 containers: []
	W0920 10:56:34.640078    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:34.640154    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:34.656540    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:34.656559    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:34.656565    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:34.672998    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:34.673010    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:34.697780    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:34.697794    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:34.702393    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:34.702409    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:34.723496    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:34.723508    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:34.743928    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:34.743940    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:34.757235    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:34.757247    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:34.769861    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:34.769872    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:34.784006    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:34.784018    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:34.798528    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:34.798542    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:34.811270    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:34.811282    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:34.851209    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:34.851221    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:34.865301    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:34.865312    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:34.888924    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:34.888934    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:34.902968    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:34.902979    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:34.942554    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:34.942565    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:34.980940    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:34.980953    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:34.658170    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:34.658276    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:34.669851    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:56:34.669949    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:34.681453    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:56:34.681541    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:34.692244    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:56:34.692331    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:34.703067    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:56:34.703149    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:34.714895    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:56:34.714994    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:34.726321    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:56:34.726405    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:34.737227    8893 logs.go:276] 0 containers: []
	W0920 10:56:34.737239    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:34.737331    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:34.748883    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:56:34.748900    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:34.748907    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:34.786262    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:56:34.786272    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:56:34.802291    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:56:34.802303    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:56:34.817870    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:56:34.817888    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:56:34.830460    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:56:34.830472    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:56:34.842528    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:34.842539    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:34.868179    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:56:34.868190    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:34.880456    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:34.880472    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:34.885402    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:34.885410    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:34.924466    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:56:34.924482    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:56:34.940942    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:56:34.940953    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:56:34.953263    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:56:34.953275    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:56:34.971942    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:56:34.971962    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:56:37.488598    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:37.495267    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:42.490948    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:42.491180    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:42.506590    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:56:42.506690    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:42.519245    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:56:42.519329    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:42.533093    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:56:42.533179    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:42.544286    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:56:42.544364    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:42.555470    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:56:42.555552    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:42.567049    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:56:42.567130    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:42.588399    8893 logs.go:276] 0 containers: []
	W0920 10:56:42.588410    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:42.588473    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:42.599331    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:56:42.599348    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:56:42.599355    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:56:42.615974    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:42.615991    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:42.641709    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:42.641721    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:42.679487    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:56:42.679500    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:56:42.692940    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:56:42.692953    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:56:42.705000    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:56:42.705012    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:56:42.719332    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:56:42.719347    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:56:42.732005    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:56:42.732016    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:56:42.754118    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:56:42.754131    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:56:42.766967    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:56:42.766979    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:42.779629    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:42.779642    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:42.815104    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:42.815116    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:42.820047    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:56:42.820058    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
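
The pattern repeating through this log is minikube's apiserver wait loop: an HTTPS GET against https://10.0.2.15:8443/healthz with a short client-side timeout, followed, on every failure, by a sweep that tails the logs of each control-plane container before the next probe. The "Client.Timeout exceeded while awaiting headers" text in the "stopped:" lines is what Go's net/http reports when http.Client.Timeout fires before response headers arrive. As a rough illustration only (this is not minikube's source; the 5-second timeout, the skipped certificate verification, and every name below are assumptions inferred from the log), the probe half of the loop could look like:

// Hypothetical sketch of the healthz probe behind the api_server.go log lines.
// Not minikube code: names, timeout value, and TLS handling are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		// A ~5s budget matches the gap between each "Checking apiserver healthz"
		// line and its "stopped: ... context deadline exceeded" counterpart.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate at this point, so a
			// probe like this would have to skip verification (assumption).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "Client.Timeout exceeded while awaiting headers"
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	for attempt := 0; attempt < 10; attempt++ {
		if err := checkHealthz(url); err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue // the real loop gathers container logs here, then retries
		}
		fmt.Println("apiserver healthy")
		return
	}
}

Note that two such loops run concurrently here (PIDs 8893 and 9036, one per test cluster under way), which is why the timestamps interleave out of order in the lines above and below.
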
	I0920 10:56:42.497518    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:42.497655    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:42.511644    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:42.511738    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:42.523586    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:42.523669    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:42.534379    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:42.534438    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:42.547239    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:42.547317    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:42.558673    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:42.558748    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:42.571721    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:42.571807    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:42.582978    9036 logs.go:276] 0 containers: []
	W0920 10:56:42.582990    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:42.583056    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:42.593979    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:42.593999    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:42.594007    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:42.608870    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:42.608885    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:42.621677    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:42.621691    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:42.638138    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:42.638149    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:42.656514    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:42.656532    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:42.671838    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:42.671855    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:42.687595    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:42.687607    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:42.714690    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:42.714713    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:42.727735    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:42.727748    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:42.768706    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:42.768718    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:42.773490    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:42.773502    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:42.788916    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:42.788927    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:42.802628    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:42.802643    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:42.815015    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:42.815026    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:42.852640    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:42.852651    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:42.890354    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:42.890366    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:42.901599    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:42.901609    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
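
The gathering half of each cycle is equally mechanical: for every component, docker ps -a --filter=name=k8s_<component> --format={{.ID}} lists candidate containers, then each returned ID is tailed with docker logs --tail 400; kubelet, Docker, and dmesg output come from journalctl and dmesg over the same SSH runner, and the `which crictl || echo crictl` ... `|| sudo docker ps -a` construct lets the container-status step fall back to plain docker when crictl is absent. A minimal local sketch of the docker half (hypothetical names, runnable anywhere with a Docker CLI, not taken from minikube's code):

// Hypothetical sketch of the container enumeration and log tailing visible in
// the logs.go lines of this report. Not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containersFor lists the IDs of containers whose name matches k8s_<component>,
// mirroring the log's `docker ps -a --filter=name=k8s_... --format={{.ID}}` runs.
func containersFor(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, c := range components {
		ids, err := containersFor(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Mirrors the log's `docker logs --tail 400 <id>` invocations.
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("  %s: %d bytes of logs\n", id, len(out))
		}
	}
}

The "N containers: [...]" output format matches what logs.go:276 records above, though the exact implementation is not visible in this report.
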
	I0920 10:56:45.414969    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:45.341294    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:50.417189    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:50.417282    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:50.428944    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:50.429042    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:50.440044    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:50.440128    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:50.452186    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:50.452266    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:50.463678    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:50.463764    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:50.475029    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:50.475108    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:50.486417    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:50.486498    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:50.497559    9036 logs.go:276] 0 containers: []
	W0920 10:56:50.497570    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:50.497642    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:50.510756    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:50.510773    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:50.510778    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:50.527606    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:50.527615    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:50.540305    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:50.540317    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:50.552924    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:50.552933    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:50.591583    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:50.591593    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:50.632485    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:50.632494    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:50.343633    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:50.343841    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:50.363987    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:56:50.364095    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:50.376222    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:56:50.376301    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:50.388797    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:56:50.388883    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:50.399090    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:56:50.399168    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:50.409749    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:56:50.409833    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:50.420549    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:56:50.420628    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:50.432508    8893 logs.go:276] 0 containers: []
	W0920 10:56:50.432522    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:50.432588    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:50.443989    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:56:50.444005    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:50.444011    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:50.482465    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:56:50.482478    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:56:50.497915    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:56:50.497925    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:56:50.512978    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:56:50.512988    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:56:50.525442    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:50.525458    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:50.550805    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:56:50.550828    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:56:50.564447    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:56:50.564462    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:50.576678    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:50.576688    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:50.612545    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:50.612564    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:50.617880    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:56:50.617891    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:56:50.631245    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:56:50.631257    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:56:50.647430    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:56:50.647447    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:56:50.660637    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:56:50.660653    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:56:50.674828    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:50.674840    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:50.689280    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:50.689297    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:50.702541    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:50.702552    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:50.725495    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:50.725502    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:50.729341    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:50.729347    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:50.740562    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:50.740576    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:50.753105    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:50.753117    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:50.764945    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:50.764955    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:50.793082    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:50.793092    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:50.816345    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:50.816358    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:50.833426    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:50.833437    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:53.347732    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:53.180813    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:58.350084    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:58.350186    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:58.361839    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:58.361921    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:58.372794    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:58.372879    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:58.384342    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:58.384424    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:58.396434    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:58.396522    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:58.410489    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:58.410570    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:58.421835    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:58.421921    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:58.432943    9036 logs.go:276] 0 containers: []
	W0920 10:56:58.432955    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:58.433033    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:58.446380    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:58.446399    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:58.446406    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:58.450756    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:58.450766    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:58.466282    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:58.466293    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:58.504688    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:58.504704    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:58.519849    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:58.519862    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:58.534913    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:58.534928    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:58.557433    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:58.557440    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:58.571031    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:58.571041    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:58.607678    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:58.607686    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:58.642279    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:58.642292    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:58.653681    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:58.653693    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:58.665312    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:58.665325    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:58.677205    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:58.677215    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:58.694677    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:58.694687    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:58.708452    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:58.708468    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:58.724030    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:58.724042    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:58.743248    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:58.743261    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:58.183161    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:58.183449    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:58.203476    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:56:58.203595    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:58.217782    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:56:58.217874    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:58.230582    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:56:58.230666    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:58.241436    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:56:58.241512    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:58.252006    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:56:58.252093    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:58.262800    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:56:58.262872    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:58.272716    8893 logs.go:276] 0 containers: []
	W0920 10:56:58.272731    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:58.272799    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:58.284915    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:56:58.284931    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:56:58.284937    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:56:58.303237    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:56:58.303252    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:56:58.315303    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:58.315313    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:58.339005    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:58.339019    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:58.344717    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:58.344726    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:58.385317    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:56:58.385326    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:56:58.400162    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:56:58.400175    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:56:58.412818    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:56:58.412829    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:56:58.429310    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:56:58.429325    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:56:58.442281    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:56:58.442293    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:58.454720    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:58.454734    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:58.490500    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:56:58.490514    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:56:58.507982    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:56:58.507993    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:01.022738    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:01.267928    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:06.024970    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:06.025184    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:06.038390    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:06.038483    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:06.049569    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:06.049659    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:06.060301    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:57:06.060382    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:06.070710    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:06.070788    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:06.081018    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:06.081104    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:06.095146    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:06.095234    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:06.105979    8893 logs.go:276] 0 containers: []
	W0920 10:57:06.105990    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:06.106065    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:06.116814    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:57:06.116826    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:06.116831    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:06.128709    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:06.128723    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:06.140400    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:06.140414    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:06.163594    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:06.163602    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:06.175992    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:06.176008    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:06.210549    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:06.210560    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:06.226708    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:06.226718    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:06.240964    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:06.240974    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:06.252528    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:06.252538    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:06.263954    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:06.263964    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:06.280256    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:06.280273    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:06.299187    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:06.299202    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:06.304291    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:06.304303    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:06.270207    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:06.270307    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:06.283006    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:06.283095    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:06.294983    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:06.295069    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:06.307379    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:06.307467    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:06.318637    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:06.318721    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:06.329913    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:06.330001    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:06.342976    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:06.343066    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:06.355025    9036 logs.go:276] 0 containers: []
	W0920 10:57:06.355038    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:06.355113    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:06.370880    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:06.370897    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:06.370902    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:06.386633    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:06.386646    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:57:06.398648    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:06.398663    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:06.422206    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:06.422213    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:06.440655    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:06.440667    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:06.476273    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:06.476287    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:06.491119    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:06.491130    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:06.502825    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:06.502836    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:06.514153    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:06.514166    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:06.527531    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:06.527542    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:06.539808    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:06.539823    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:06.576115    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:06.576125    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:06.580645    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:06.580654    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:06.619087    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:06.619104    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:06.639586    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:06.639596    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:06.653183    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:06.653193    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:06.664779    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:06.664789    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:09.178011    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:08.848138    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:14.180250    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:14.180371    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:14.191845    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:14.191931    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:14.203152    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:14.203234    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:14.214497    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:14.214590    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:14.225320    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:14.225403    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:14.236252    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:14.236339    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:14.246821    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:14.246905    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:14.258888    9036 logs.go:276] 0 containers: []
	W0920 10:57:14.258903    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:14.258976    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:14.269155    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:14.269175    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:14.269182    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:14.281415    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:14.281426    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:57:14.293348    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:14.293364    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:14.307753    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:14.307763    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:14.322096    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:14.322110    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:14.339419    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:14.339429    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:14.375962    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:14.375973    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:14.390432    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:14.390442    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:14.428029    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:14.428046    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:14.439464    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:14.439476    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:14.451724    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:14.451736    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:14.466658    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:14.466672    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:14.506281    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:14.506293    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:14.510401    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:14.510410    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:14.532789    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:14.532798    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:14.544669    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:14.544680    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:14.560635    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:14.560647    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:13.850764    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:13.851246    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:13.881435    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:13.881592    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:13.900201    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:13.900314    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:13.914727    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:57:13.914813    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:13.926850    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:13.926934    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:13.937685    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:13.937774    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:13.952248    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:13.952336    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:13.962354    8893 logs.go:276] 0 containers: []
	W0920 10:57:13.962371    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:13.962441    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:13.972894    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:57:13.972913    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:13.972918    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:13.985271    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:13.985282    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:14.001153    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:14.001163    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:14.029987    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:14.030002    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:14.041336    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:14.041346    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:14.053647    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:14.053663    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:14.058265    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:14.058275    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:14.072803    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:14.072818    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:14.086461    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:14.086470    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:14.099052    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:14.099064    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:14.122614    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:14.122621    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:14.155209    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:14.155219    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:14.192139    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:14.192148    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:16.706647    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:17.075397    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:21.709012    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:21.709298    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:21.734327    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:21.734454    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:21.750072    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:21.750153    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:21.762462    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:57:21.762538    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:21.773385    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:21.773458    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:21.784146    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:21.784216    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:21.801707    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:21.801799    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:21.817463    8893 logs.go:276] 0 containers: []
	W0920 10:57:21.817478    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:21.817554    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:21.830797    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:57:21.830812    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:21.830818    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:21.845113    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:21.845126    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:21.859428    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:21.859443    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:21.871727    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:21.871738    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:21.889556    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:21.889568    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:21.901457    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:21.901471    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:21.918788    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:21.918798    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:21.923209    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:21.923216    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:21.994505    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:21.994515    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:22.006428    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:22.006438    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:22.018655    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:22.018670    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:22.043838    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:22.043855    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:22.055106    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:22.055119    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:22.076830    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:22.076935    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:22.093772    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:22.093861    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:22.104819    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:22.104900    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:22.115577    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:22.115649    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:22.125867    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:22.125937    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:22.136018    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:22.136089    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:22.147946    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:22.148028    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:22.157839    9036 logs.go:276] 0 containers: []
	W0920 10:57:22.157857    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:22.157923    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:22.168478    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:22.168496    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:22.168502    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:22.173035    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:22.173040    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:22.187516    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:22.187525    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:22.202292    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:22.202302    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:22.213359    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:22.213373    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:22.227697    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:22.227706    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:22.241106    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:22.241116    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:57:22.252728    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:22.252739    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:22.289051    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:22.289062    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:22.326976    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:22.326988    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:22.342759    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:22.342773    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:22.356225    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:22.356240    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:22.367678    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:22.367688    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:22.404486    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:22.404494    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:22.418100    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:22.418109    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:22.430212    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:22.430223    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:22.447366    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:22.447377    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:24.973336    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:24.592035    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:29.975647    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
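
The pair of api_server.go lines above is the health probe that drives this whole section: an HTTPS GET against https://10.0.2.15:8443/healthz with a five-second client timeout, marked "stopped" when the deadline is exceeded. A minimal standalone sketch of that probe pattern follows; probeHealthz is a hypothetical name, and skipping TLS verification is an assumption for the self-signed in-VM certificate, not a copy of minikube's actual api_server.go.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeHealthz issues one GET against the apiserver's /healthz endpoint
    // with a hard client-side timeout, mirroring the
    // "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded"
    // pair in the log above.
    func probeHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: timeout,
    		// Assumption: the apiserver inside the VM serves a self-signed
    		// certificate, so verification is skipped for the probe.
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. context deadline exceeded
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	if err := probeHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
    		fmt.Println("stopped:", err)
    	}
    }
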
	I0920 10:57:29.975747    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:29.987633    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:29.987716    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:29.998081    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:29.998174    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:30.012523    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:30.012610    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:30.023145    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:30.023231    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:30.034098    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:30.034182    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:30.045454    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:30.045536    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:30.055753    9036 logs.go:276] 0 containers: []
	W0920 10:57:30.055764    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:30.055833    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:30.066346    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:30.066366    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:30.066372    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:30.101634    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:30.101647    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:30.115459    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:30.115473    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:30.130981    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:30.130992    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:30.154457    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:30.154464    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:30.166044    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:30.166054    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:30.170480    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:30.170486    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:30.208519    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:30.208529    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:30.225871    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:30.225880    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:30.239027    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:30.239037    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:30.253208    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:30.253218    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:30.267927    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:30.267937    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:30.278804    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:30.278815    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:30.294438    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:30.294449    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:30.332981    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:30.332988    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:30.345269    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:30.345283    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:30.357349    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:30.357360    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
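
Each collection pass opens the same way: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component, where two IDs for a component (as with kube-apiserver here) mean an exited container plus its restarted successor. A sketch of that enumeration step, assuming docker is on PATH; listContainers is an illustrative helper, not minikube's logs.go.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers returns the IDs of all containers, running or exited,
    // whose name matches the kubeadm convention k8s_<component>_..., the
    // same filter the ssh_runner invocations above use.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }
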
	I0920 10:57:29.592969    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:29.593280    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:29.625115    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:29.625224    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:29.640851    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:29.640934    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:29.652342    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:57:29.652419    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:29.662385    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:29.662473    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:29.676746    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:29.676830    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:29.687477    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:29.687557    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:29.697996    8893 logs.go:276] 0 containers: []
	W0920 10:57:29.698008    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:29.698082    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:29.708738    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:57:29.708757    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:29.708763    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:29.741957    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:29.741965    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:29.778184    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:29.778194    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:29.792873    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:29.792886    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:29.806718    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:29.806730    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:29.819066    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:29.819076    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:29.831226    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:29.831234    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:29.835786    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:29.835795    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:29.847670    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:29.847685    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:29.863283    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:29.863292    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:29.881216    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:29.881229    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:29.896987    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:29.896997    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:29.919858    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:29.919866    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:32.432965    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:32.871588    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:37.435201    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:37.435427    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:37.452032    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:37.452131    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:37.464408    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:37.464495    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:37.474769    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:57:37.474846    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:37.489133    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:37.489207    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:37.499866    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:37.499944    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:37.510700    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:37.510770    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:37.521369    8893 logs.go:276] 0 containers: []
	W0920 10:57:37.521381    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:37.521446    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:37.536800    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:57:37.536817    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:37.536822    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:37.541316    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:37.541324    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:37.555538    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:37.555553    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:37.569379    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:37.569390    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:37.581443    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:37.581454    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:37.596308    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:37.596317    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:37.613934    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:37.613945    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:37.625647    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:37.625658    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:37.658054    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:37.658063    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:37.669753    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:37.669765    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:37.681204    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:37.681214    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:37.706024    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:37.706035    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:37.719135    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:37.719148    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:37.873844    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:37.873963    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:37.886718    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:37.886802    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:37.897655    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:37.897737    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:37.908281    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:37.908372    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:37.918502    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:37.918583    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:37.928726    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:37.928809    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:37.940208    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:37.940297    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:37.950149    9036 logs.go:276] 0 containers: []
	W0920 10:57:37.950160    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:37.950231    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:37.960478    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:37.960497    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:37.960502    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:37.971886    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:37.971901    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:37.985261    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:37.985271    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:37.999199    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:37.999212    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:38.013125    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:38.013136    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:38.027474    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:38.027484    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:38.045721    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:38.045732    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:38.057166    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:38.057177    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:38.079611    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:38.079618    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:38.091302    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:38.091316    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:38.126800    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:38.126815    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:38.165023    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:38.165034    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:38.176724    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:38.176732    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:57:38.188135    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:38.188145    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:38.224261    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:38.224272    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:38.239433    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:38.239445    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:38.256476    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:38.256487    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
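
For every container ID the enumeration finds, the collector then tails the last 400 lines with `docker logs --tail 400 <id>` (plus journalctl for kubelet and Docker, and dmesg for the kernel). A sketch of that per-container step; gather is a hypothetical helper, and the two IDs are the kube-apiserver containers reported in this pass, shown purely as sample input.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather tails the last 400 lines of one container's log, matching the
    // `docker logs --tail 400 <id>` commands issued for each component
    // above. docker logs writes to both stdout and stderr, hence
    // CombinedOutput.
    func gather(id string) (string, error) {
    	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// IDs as reported in this pass for the two kube-apiserver
    	// containers; substitute the IDs from your own `docker ps -a`.
    	for _, id := range []string{"67063f9c0906", "1619d098154d"} {
    		logTail, err := gather(id)
    		if err != nil {
    			fmt.Println(id, "error:", err)
    			continue
    		}
    		fmt.Printf("=== %s ===\n%s", id, logTail)
    	}
    }
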
	I0920 10:57:40.257154    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:40.762594    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:45.259981    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:45.260136    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:45.274845    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:45.274938    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:45.286890    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:45.286964    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:45.298593    8893 logs.go:276] 2 containers: [9c5c743915c1 2f543a3a77a1]
	I0920 10:57:45.298673    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:45.309668    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:45.309739    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:45.320121    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:45.320198    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:45.330546    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:45.330633    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:45.340701    8893 logs.go:276] 0 containers: []
	W0920 10:57:45.340713    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:45.340779    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:45.351138    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:57:45.351153    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:45.351159    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:45.362918    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:45.362929    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:45.381516    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:45.381531    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:45.393284    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:45.393299    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:45.429714    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:45.429724    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:45.465582    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:45.465593    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:45.488264    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:45.488274    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:45.502125    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:45.502135    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:45.514506    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:45.514516    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:45.519024    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:45.519030    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:45.530657    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:45.530671    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:45.546469    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:45.546480    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:45.571463    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:45.571473    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:45.764976    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:45.765091    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:45.775735    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:45.775824    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:45.786545    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:45.786631    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:45.797462    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:45.797542    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:45.808808    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:45.808888    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:45.820114    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:45.820198    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:45.831561    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:45.831648    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:45.842245    9036 logs.go:276] 0 containers: []
	W0920 10:57:45.842257    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:45.842324    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:45.854338    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:45.854357    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:45.854362    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:45.894645    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:45.894666    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:45.913218    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:45.913233    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:45.929805    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:45.929821    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:45.954160    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:45.954172    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:45.958679    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:45.958686    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:45.997779    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:45.997790    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:46.012209    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:46.012223    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:46.026400    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:46.026411    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:46.038172    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:46.038183    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:46.058464    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:46.058476    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:46.070998    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:46.071008    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:46.082781    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:46.082794    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:46.100352    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:46.100366    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:46.112528    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:46.112540    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:46.124866    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:46.124877    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:46.163869    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:46.163884    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:57:48.676400    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:48.084702    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:53.678747    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:53.678941    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:53.693238    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:53.693335    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:53.704965    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:53.705057    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:53.715623    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:53.715709    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:53.726670    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:53.726755    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:53.737407    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:53.737475    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:53.748254    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:53.748332    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:53.759975    9036 logs.go:276] 0 containers: []
	W0920 10:57:53.759987    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:53.760056    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:53.770821    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:53.770837    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:53.770842    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:53.783485    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:53.783496    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:53.798861    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:53.798873    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:53.836977    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:53.836990    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:53.848041    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:53.848053    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:53.859565    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:53.859578    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:57:53.871353    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:53.871364    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:53.883020    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:53.883030    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:53.905217    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:53.905224    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:53.941699    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:53.941710    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:53.956643    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:53.956651    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:53.973622    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:53.973640    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:53.994308    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:53.994319    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:54.006332    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:54.006347    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:54.011180    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:54.011188    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:54.045853    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:54.045867    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:54.063699    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:54.063709    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:53.087064    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:53.087403    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:53.116099    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:57:53.116241    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:53.134672    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:57:53.134771    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:53.147925    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:57:53.148016    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:53.159009    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:57:53.159093    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:53.169603    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:57:53.169682    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:53.181484    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:57:53.181564    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:53.192054    8893 logs.go:276] 0 containers: []
	W0920 10:57:53.192067    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:53.192140    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:53.205877    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:57:53.205898    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:57:53.205904    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:57:53.218166    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:53.218179    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:53.242376    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:53.242384    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:53.247226    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:57:53.247233    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:57:53.265802    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:53.265813    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:53.302300    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:57:53.302312    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:57:53.316022    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:57:53.316035    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:57:53.328232    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:57:53.328243    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:57:53.345567    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:57:53.345581    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:53.357230    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:53.357243    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:53.391823    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:57:53.391836    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:57:53.404076    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:57:53.404090    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:57:53.415233    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:57:53.415246    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:57:53.433601    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:57:53.433611    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:57:53.447422    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:57:53.447432    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:57:55.960163    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:56.588165    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:00.962525    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:00.962812    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:00.983455    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:00.983576    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:00.998705    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:00.998795    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:01.012545    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:01.012631    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:01.024124    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:01.024212    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:01.034906    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:01.034990    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:01.045308    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:01.045389    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:01.055092    8893 logs.go:276] 0 containers: []
	W0920 10:58:01.055108    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:01.055180    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:01.066167    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:01.066188    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:01.066194    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:01.080647    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:01.080657    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:01.102774    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:01.102785    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:01.114926    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:01.114938    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:01.132343    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:01.132353    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:01.136799    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:01.136809    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:01.176008    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:01.176024    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:01.188442    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:01.188453    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:01.212902    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:01.212910    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:01.237856    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:01.237872    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:01.250880    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:01.250892    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:01.265334    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:01.265345    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:01.277758    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:01.277767    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:01.311798    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:01.311814    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:01.323086    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:01.323097    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:01.590407    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:01.590598    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:01.605226    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:58:01.605312    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:01.616791    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:58:01.616873    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:01.627409    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:58:01.627484    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:01.638823    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:58:01.638906    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:01.649465    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:58:01.649542    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:01.660399    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:58:01.660471    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:01.671258    9036 logs.go:276] 0 containers: []
	W0920 10:58:01.671277    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:01.671353    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:01.682094    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:58:01.682112    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:58:01.682119    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:58:01.693998    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:58:01.694010    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:58:01.715341    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:58:01.715351    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:58:01.729553    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:58:01.729566    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:58:01.741343    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:58:01.741354    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:58:01.759464    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:58:01.759476    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:58:01.773117    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:01.773130    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:01.797161    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:01.797167    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:01.832316    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:58:01.832331    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:58:01.844180    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:58:01.844192    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:01.856989    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:58:01.857000    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:58:01.872621    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:01.872635    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:01.877186    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:58:01.877194    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:58:01.914604    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:58:01.914615    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:58:01.929280    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:58:01.929291    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:58:01.941421    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:58:01.941433    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:58:01.952243    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:01.952255    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
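
Taken together, the section is one outer loop: probe healthz, and on timeout re-enumerate containers and re-collect logs before the next attempt — the timestamps show a five-second probe followed by roughly a three-second gap (10:57:24.97 → stopped 10:57:29.97 → next probe 10:57:32.87). A self-contained sketch of that loop under those assumptions; waitForAPIServer, the 3-second sleep, and the 30-second overall budget are illustrative, not minikube's actual retry parameters.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForAPIServer repeats a five-second healthz probe until it
    // succeeds or an overall deadline passes, mirroring the cadence of the
    // "Checking apiserver healthz" lines in this log.
    func waitForAPIServer(url string, overall time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Assumption: self-signed in-VM certificate.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(overall)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		// In the real flow, the failure path re-runs the container
    		// enumeration and log tailing seen above before retrying.
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("apiserver not healthy within %s", overall)
    }

    func main() {
    	if err := waitForAPIServer("https://10.0.2.15:8443/healthz", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
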
	I0920 10:58:04.490905    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:03.836664    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:09.493101    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:09.493278    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:09.504400    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:58:09.504492    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:09.514846    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:58:09.514935    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:09.529515    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:58:09.529596    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:09.540162    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:58:09.540249    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:09.553661    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:58:09.553741    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:09.566060    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:58:09.566146    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:09.576987    9036 logs.go:276] 0 containers: []
	W0920 10:58:09.577000    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:09.577077    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:09.587253    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:58:09.587271    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:58:09.587276    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:58:09.602175    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:58:09.602185    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:58:09.619296    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:58:09.619307    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:58:09.633496    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:58:09.633509    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:58:09.646476    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:58:09.646488    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:58:09.660273    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:09.660283    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:09.694685    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:58:09.694701    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:58:09.733712    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:58:09.733723    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:58:09.745914    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:58:09.745926    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:58:09.757459    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:09.757471    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:09.779823    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:58:09.779832    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:09.793193    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:58:09.793204    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:58:09.812325    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:58:09.812334    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:58:09.824331    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:58:09.824341    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:58:09.836738    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:58:09.836749    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:58:09.850519    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:09.850529    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:09.887994    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:09.888002    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
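The block above is one complete pass of minikube's log-collection cycle, repeated many times in this section: for each control-plane component it lists matching containers with docker ps -a --filter=name=k8s_<component>, then tails the last 400 lines of every container found. A minimal Go sketch of that pattern follows; the run helper is invented for illustration, shelling out locally as a stand-in for ssh_runner.go's remote Run.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // run executes a shell command; a local stand-in (assumption) for
    // minikube's ssh_runner, which runs the same commands on the node.
    func run(cmd string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            // Same filter the log shows: docker ps -a --filter=name=k8s_<c>
            out, err := run(fmt.Sprintf(
                "docker ps -a --filter=name=k8s_%s --format={{.ID}}", c))
            if err != nil {
                continue
            }
            ids := strings.Fields(out)
            if len(ids) == 0 {
                // Matches the warnings above for kindnet (0 containers).
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                // Tail the last 400 lines, as logs.go does.
                logs, _ := run("docker logs --tail 400 " + id)
                fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
            }
        }
    }

Run on the node itself, this reproduces the per-component sections that minikube stitches into its diagnostic output, including the empty kindnet result.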
	I0920 10:58:08.839091    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:08.839624    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:08.875654    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:08.875808    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:08.895908    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:08.896027    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:08.910524    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:08.910628    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:08.922949    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:08.923032    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:08.934239    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:08.934325    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:08.945088    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:08.945176    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:08.955672    8893 logs.go:276] 0 containers: []
	W0920 10:58:08.955687    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:08.955756    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:08.966243    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:08.966262    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:08.966267    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:08.991123    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:08.991131    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:09.005805    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:09.005820    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:09.018406    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:09.018417    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:09.031047    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:09.031057    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:09.043041    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:09.043053    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:09.058956    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:09.058973    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:09.071335    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:09.071345    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:09.108070    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:09.108080    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:09.122816    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:09.122828    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:09.139189    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:09.139211    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:09.151430    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:09.151441    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:09.169162    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:09.169173    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:09.173628    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:09.173635    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:09.185269    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:09.185280    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:11.718017    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:12.394043    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:16.720292    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
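Between collection passes, both processes (pids 8893 and 9036) keep probing the apiserver's healthz endpoint and hitting the client timeout, producing the Checking/stopped pairs seen throughout. A hedged sketch of such a probe loop: the 5-second per-request timeout, the retry interval, and the 4-minute overall budget are assumptions chosen to match the cadence here and the eventual "took 4m4s" give-up later in the log, not minikube's actual constants; InsecureSkipVerify stands in for minikube's real certificate handling.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // Per-request deadline; exceeding it yields the
            // "Client.Timeout exceeded while awaiting headers" errors above.
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver cert is not trusted by the host in this sketch.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(4 * time.Minute) // overall budget before giving up
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // e.g. context deadline exceeded
                time.Sleep(3 * time.Second)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("gave up waiting for apiserver")
    }

Every failed iteration of this loop triggers another full log-collection pass, which is why the two patterns alternate for pages.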
	I0920 10:58:16.720554    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:16.739039    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:16.739152    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:16.752510    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:16.752594    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:16.764682    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:16.764749    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:16.774623    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:16.774688    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:16.785849    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:16.785930    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:16.796576    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:16.796650    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:16.806940    8893 logs.go:276] 0 containers: []
	W0920 10:58:16.806955    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:16.807016    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:16.817490    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:16.817507    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:16.817512    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:16.852416    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:16.852430    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:16.864006    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:16.864017    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:16.876319    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:16.876334    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:16.892085    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:16.892096    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:16.903741    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:16.903752    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:16.938302    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:16.938313    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:16.958341    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:16.958367    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:16.969515    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:16.969530    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:16.981550    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:16.981560    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:16.993379    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:16.993390    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:16.997885    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:16.997891    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:17.012046    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:17.012056    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:17.029003    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:17.029016    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:17.053783    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:17.053792    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:17.396208    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:17.396317    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:17.406550    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:58:17.406634    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:17.417018    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:58:17.417093    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:17.427827    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:58:17.427907    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:17.443644    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:58:17.443730    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:17.455945    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:58:17.456015    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:17.466710    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:58:17.466789    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:17.477624    9036 logs.go:276] 0 containers: []
	W0920 10:58:17.477634    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:17.477699    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:17.488165    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:58:17.488184    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:58:17.488189    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:58:17.527452    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:58:17.527463    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:58:17.541324    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:58:17.541340    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:58:17.552990    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:58:17.553002    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:58:17.563815    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:17.563827    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:17.601669    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:17.601686    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:17.606024    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:58:17.606032    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:58:17.620581    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:58:17.620592    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:58:17.637070    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:58:17.637087    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:58:17.649176    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:58:17.649191    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:58:17.662294    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:17.662305    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:17.697742    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:58:17.697752    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:58:17.712587    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:58:17.712599    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:58:17.723625    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:58:17.723639    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:58:17.735443    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:58:17.735453    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:58:17.754041    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:58:17.754051    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:17.767043    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:17.767056    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:20.292788    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:19.565526    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:25.295476    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:25.295678    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:25.310149    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:58:25.310230    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:25.320863    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:58:25.320929    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:25.331231    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:58:25.331315    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:25.342360    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:58:25.342440    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:25.364135    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:58:25.364278    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:25.377380    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:58:25.377453    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:25.387518    9036 logs.go:276] 0 containers: []
	W0920 10:58:25.387530    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:25.387598    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:25.398043    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:58:25.398060    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:58:25.398065    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:58:25.416029    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:58:25.416040    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:58:25.429852    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:58:25.429866    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:58:25.441588    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:25.441599    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:25.479124    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:58:25.479137    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:58:25.490384    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:58:25.490395    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:58:25.508039    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:58:25.508049    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:58:25.545152    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:58:25.545163    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:58:25.559325    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:58:25.559334    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:58:25.580722    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:58:25.580738    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:58:25.595763    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:58:25.595772    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:58:25.607378    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:25.607388    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:25.630164    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:25.630172    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:25.634731    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:25.634739    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:24.567725    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:24.567861    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:24.579445    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:24.579546    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:24.594221    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:24.594297    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:24.604697    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:24.604783    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:24.615486    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:24.615571    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:24.626216    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:24.626302    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:24.637141    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:24.637224    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:24.647310    8893 logs.go:276] 0 containers: []
	W0920 10:58:24.647320    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:24.647391    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:24.657738    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:24.657756    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:24.657762    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:24.669841    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:24.669851    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:24.704968    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:24.704979    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:24.725137    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:24.725147    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:24.741133    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:24.741148    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:24.752948    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:24.752961    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:24.776808    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:24.776819    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:24.781260    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:24.781270    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:24.796816    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:24.796830    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:24.814243    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:24.814255    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:24.825420    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:24.825429    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:24.837195    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:24.837208    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:24.848342    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:24.848354    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:24.859942    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:24.859952    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:24.894395    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:24.894406    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:27.410497    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:25.668916    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:58:25.668925    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:25.681002    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:58:25.681016    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:58:25.693000    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:58:25.693009    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:58:28.206970    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:32.412823    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:32.413060    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:32.428348    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:32.428454    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:32.441486    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:32.441581    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:32.452003    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:32.452083    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:32.462993    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:32.463076    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:32.474004    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:32.474087    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:32.485073    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:32.485147    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:32.495653    8893 logs.go:276] 0 containers: []
	W0920 10:58:32.495675    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:32.495749    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:32.505982    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:32.506000    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:32.506005    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:32.517691    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:32.517705    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:32.543210    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:32.543219    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:32.548039    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:32.548048    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:32.559693    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:32.559703    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:32.577257    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:32.577269    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:32.611913    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:32.611920    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:32.623972    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:32.623982    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:32.641241    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:32.641251    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:32.656562    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:32.656572    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:32.675055    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:32.675065    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:32.695300    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:32.695310    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:32.706955    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:32.706965    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:32.718767    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:32.718778    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:32.756669    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:32.756680    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:33.209279    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:33.209472    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:33.226447    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:58:33.226548    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:33.239251    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:58:33.239340    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:33.250374    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:58:33.250452    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:33.260984    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:58:33.261073    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:33.271368    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:58:33.271452    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:33.281988    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:58:33.282068    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:33.291691    9036 logs.go:276] 0 containers: []
	W0920 10:58:33.291703    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:33.291777    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:33.302288    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:58:33.302305    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:58:33.302311    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:58:33.314201    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:58:33.314212    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:33.326713    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:33.326726    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:33.331498    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:58:33.331505    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:58:33.343024    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:58:33.343036    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:58:33.356864    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:58:33.356873    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:58:33.368418    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:58:33.368429    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:58:33.381977    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:33.381987    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:33.430410    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:58:33.430422    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:58:33.488503    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:33.488517    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:33.511685    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:33.511696    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:33.550100    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:58:33.550109    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:58:33.573301    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:58:33.573315    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:58:33.588047    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:58:33.588057    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:58:33.599331    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:58:33.599341    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:58:33.610161    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:58:33.610171    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:58:33.624293    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:58:33.624304    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:58:35.280016    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:36.143274    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:40.282450    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:40.282951    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:40.323156    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:40.323321    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:40.346220    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:40.346312    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:40.359121    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:40.359199    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:40.377181    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:40.377262    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:40.388182    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:40.388274    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:40.403309    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:40.403401    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:40.413746    8893 logs.go:276] 0 containers: []
	W0920 10:58:40.413758    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:40.413838    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:40.428781    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:40.428803    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:40.428809    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:40.461981    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:40.461995    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:40.483522    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:40.483537    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:40.496038    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:40.496049    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:40.520861    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:40.520869    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:40.555458    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:40.555472    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:40.570039    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:40.570051    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:40.581926    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:40.581937    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:40.593274    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:40.593285    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:40.597713    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:40.597721    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:40.608995    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:40.609003    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:40.621034    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:40.621045    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:40.636771    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:40.636781    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:40.654841    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:40.654853    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:40.666486    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:40.666497    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:41.145543    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:41.145757    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:41.163262    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:58:41.163350    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:41.176699    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:58:41.176790    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:41.188362    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:58:41.188439    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:41.198791    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:58:41.198875    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:41.209357    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:58:41.209444    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:41.219599    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:58:41.219682    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:41.230061    9036 logs.go:276] 0 containers: []
	W0920 10:58:41.230072    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:41.230137    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:41.240622    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:58:41.240642    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:58:41.240647    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:58:41.255854    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:58:41.255864    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:58:41.267262    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:41.267272    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:41.304263    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:58:41.304272    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:58:41.320099    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:58:41.320110    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:58:41.357702    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:58:41.357715    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:58:41.373579    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:58:41.373589    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:58:41.395071    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:58:41.395082    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:58:41.406048    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:41.406060    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:41.427949    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:58:41.427957    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:58:41.442591    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:58:41.442602    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:58:41.460021    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:58:41.460034    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:58:41.482353    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:58:41.482365    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:58:41.502750    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:41.502759    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:41.507301    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:41.507308    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:41.543895    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:58:41.543907    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:58:41.557530    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:58:41.557543    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:44.072901    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:43.180412    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:49.075243    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:49.075323    9036 kubeadm.go:597] duration metric: took 4m4.124215667s to restartPrimaryControlPlane
	W0920 10:58:49.075377    9036 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 10:58:49.075404    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0920 10:58:50.084958    9036 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.009548959s)
	I0920 10:58:50.085035    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:58:50.090019    9036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:58:50.093415    9036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:58:50.096068    9036 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:58:50.096073    9036 kubeadm.go:157] found existing configuration files:
	
	I0920 10:58:50.096097    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/admin.conf
	I0920 10:58:50.098503    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:58:50.098527    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:58:50.101734    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/kubelet.conf
	I0920 10:58:50.104665    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:58:50.104692    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:58:50.107195    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/controller-manager.conf
	I0920 10:58:50.110164    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:58:50.110188    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:58:50.113130    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/scheduler.conf
	I0920 10:58:50.115740    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:58:50.115765    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
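The grep/rm exchanges above implement stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise (here all four files are simply missing, hence the status-2 exits from grep). A compact sketch of that check; the endpoint constant is copied from the log, the helper name is invented.

    package main

    import (
        "fmt"
        "os/exec"
    )

    const endpoint = "https://control-plane.minikube.internal:51540"

    // hasEndpoint reports whether the file mentions the expected endpoint.
    // grep exits non-zero both when the pattern is absent (status 1) and
    // when the file is missing (status 2) -- the case in the log above.
    func hasEndpoint(path string) bool {
        return exec.Command("sudo", "grep", endpoint, path).Run() == nil
    }

    func main() {
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, c := range confs {
            if !hasEndpoint(c) {
                // Mirrors the "will remove" branch: sudo rm -f <conf>
                exec.Command("sudo", "rm", "-f", c).Run()
                fmt.Println("removed stale", c)
            }
        }
    }

Removing the stale files lets the subsequent kubeadm init regenerate all four kubeconfigs against the fresh endpoint.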
	I0920 10:58:50.118457    9036 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 10:58:50.135588    9036 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0920 10:58:50.135716    9036 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 10:58:50.195147    9036 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 10:58:50.195198    9036 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 10:58:50.195247    9036 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 10:58:50.243493    9036 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 10:58:50.246813    9036 out.go:235]   - Generating certificates and keys ...
	I0920 10:58:50.246846    9036 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 10:58:50.246875    9036 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 10:58:50.246909    9036 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 10:58:50.246937    9036 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 10:58:50.246968    9036 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 10:58:50.246992    9036 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 10:58:50.247020    9036 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 10:58:50.247691    9036 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 10:58:50.247724    9036 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 10:58:50.247758    9036 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 10:58:50.247794    9036 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 10:58:50.247851    9036 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 10:58:50.318900    9036 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 10:58:50.405682    9036 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 10:58:50.445622    9036 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 10:58:50.480605    9036 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 10:58:50.510322    9036 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 10:58:50.510855    9036 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 10:58:50.510946    9036 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 10:58:50.603392    9036 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 10:58:50.611402    9036 out.go:235]   - Booting up control plane ...
	I0920 10:58:50.611457    9036 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 10:58:50.611500    9036 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 10:58:50.611534    9036 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 10:58:50.611609    9036 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 10:58:50.611697    9036 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
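Having given up on restarting the existing control plane, the process re-initializes from scratch: kubeadm reset, then kubeadm init with a long --ignore-preflight-errors list, after which kubeadm walks its phases (preflight, certs, kubeconfig, kubelet-start, control-plane, etcd, wait-control-plane) as logged above. A sketch of assembling and running that invocation; the PATH prefix, config path, and ignore list are copied from the log, while the local bash runner is again a stand-in for minikube's ssh_runner.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Preflight checks to skip, verbatim from the kubeadm init line above.
        ignore := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "DirAvailable--var-lib-minikube",
            "DirAvailable--var-lib-minikube-etcd",
            "FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
            "FileAvailable--etc-kubernetes-manifests-etcd.yaml",
            "Port-10250", "Swap", "NumCPU", "Mem",
        }
        cmd := fmt.Sprintf(
            `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" `+
                `kubeadm init --config /var/tmp/minikube/kubeadm.yaml `+
                `--ignore-preflight-errors=%s`,
            strings.Join(ignore, ","))
        // Collect kubeadm's phase output ([preflight], [certs], ...) as it runs.
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("kubeadm init failed:", err)
        }
    }

The existing certificates survive the reset (note the "Using existing ... certificate" lines), so only the kubeconfigs and static Pod manifests are rewritten before the wait-control-plane phase starts its own up-to-4m0s wait.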
	I0920 10:58:48.181309    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:48.181480    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:48.196116    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:48.196223    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:48.207620    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:48.207699    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:48.218640    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:48.218733    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:48.228936    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:48.229014    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:48.239544    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:48.239625    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:48.249786    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:48.249871    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:48.261442    8893 logs.go:276] 0 containers: []
	W0920 10:58:48.261452    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:48.261520    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:48.271572    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:48.271591    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:48.271596    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:48.286163    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:48.286173    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:48.299221    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:48.299234    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:48.311019    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:48.311034    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:48.322644    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:48.322656    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:48.335150    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:48.335161    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:48.354255    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:48.354264    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:48.379248    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:48.379258    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:48.384088    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:48.384095    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:48.398377    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:48.398387    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:48.410041    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:48.410055    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:48.421637    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:48.421647    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:48.433524    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:48.433538    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:48.466045    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:48.466054    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:48.504573    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:48.504585    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:51.025228    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:54.609471    9036 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001636 seconds
	I0920 10:58:54.609620    9036 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 10:58:54.613039    9036 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 10:58:55.132546    9036 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 10:58:55.132781    9036 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-423000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 10:58:55.636067    9036 kubeadm.go:310] [bootstrap-token] Using token: avlvxy.orzbh4xyhzrp3iig
	I0920 10:58:55.639145    9036 out.go:235]   - Configuring RBAC rules ...
	I0920 10:58:55.639207    9036 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 10:58:55.639255    9036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 10:58:55.644547    9036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 10:58:55.649609    9036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 10:58:55.650388    9036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 10:58:55.651223    9036 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 10:58:55.654566    9036 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 10:58:55.793148    9036 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 10:58:56.040236    9036 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 10:58:56.041110    9036 kubeadm.go:310] 
	I0920 10:58:56.041146    9036 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 10:58:56.041149    9036 kubeadm.go:310] 
	I0920 10:58:56.041196    9036 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 10:58:56.041201    9036 kubeadm.go:310] 
	I0920 10:58:56.041211    9036 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 10:58:56.041249    9036 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 10:58:56.041338    9036 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 10:58:56.041344    9036 kubeadm.go:310] 
	I0920 10:58:56.041390    9036 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 10:58:56.041395    9036 kubeadm.go:310] 
	I0920 10:58:56.041441    9036 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 10:58:56.041446    9036 kubeadm.go:310] 
	I0920 10:58:56.041494    9036 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 10:58:56.041536    9036 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 10:58:56.041587    9036 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 10:58:56.041594    9036 kubeadm.go:310] 
	I0920 10:58:56.041656    9036 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 10:58:56.041708    9036 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 10:58:56.041714    9036 kubeadm.go:310] 
	I0920 10:58:56.041812    9036 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token avlvxy.orzbh4xyhzrp3iig \
	I0920 10:58:56.041884    9036 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:129743b48d26fde689f5c5778f41c5956c7837483c32951bacd09a1e6883dcfa \
	I0920 10:58:56.041898    9036 kubeadm.go:310] 	--control-plane 
	I0920 10:58:56.041933    9036 kubeadm.go:310] 
	I0920 10:58:56.041987    9036 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 10:58:56.041995    9036 kubeadm.go:310] 
	I0920 10:58:56.042064    9036 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token avlvxy.orzbh4xyhzrp3iig \
	I0920 10:58:56.042150    9036 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:129743b48d26fde689f5c5778f41c5956c7837483c32951bacd09a1e6883dcfa 
	I0920 10:58:56.042319    9036 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 10:58:56.042332    9036 cni.go:84] Creating CNI manager for ""
	I0920 10:58:56.042340    9036 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:58:56.046141    9036 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:58:56.054097    9036 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:58:56.057252    9036 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 10:58:56.064233    9036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:58:56.064344    9036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:58:56.064368    9036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-423000 minikube.k8s.io/updated_at=2024_09_20T10_58_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=stopped-upgrade-423000 minikube.k8s.io/primary=true
	I0920 10:58:56.108627    9036 ops.go:34] apiserver oom_adj: -16
	I0920 10:58:56.108645    9036 kubeadm.go:1113] duration metric: took 44.371458ms to wait for elevateKubeSystemPrivileges
	I0920 10:58:56.108654    9036 kubeadm.go:394] duration metric: took 4m11.17100575s to StartCluster
	I0920 10:58:56.108664    9036 settings.go:142] acquiring lock: {Name:mk5f352888690de611711a90a16fd3b08e6afbf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:58:56.108761    9036 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:58:56.109169    9036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/kubeconfig: {Name:mkec5cafcbf4b1660482b6f210de54829a52092a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:58:56.109393    9036 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:58:56.109420    9036 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 10:58:56.109516    9036 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:58:56.109529    9036 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-423000"
	I0920 10:58:56.109548    9036 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-423000"
	W0920 10:58:56.109559    9036 addons.go:243] addon storage-provisioner should already be in state true
	I0920 10:58:56.109565    9036 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-423000"
	I0920 10:58:56.109598    9036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-423000"
	I0920 10:58:56.109625    9036 host.go:66] Checking if "stopped-upgrade-423000" exists ...
	I0920 10:58:56.110522    9036 retry.go:31] will retry after 975.687116ms: connect: dial unix /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/monitor: connect: connection refused
	I0920 10:58:56.115127    9036 out.go:177] * Verifying Kubernetes components...
	I0920 10:58:56.121123    9036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:58:56.026327    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:56.026443    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:56.038331    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:58:56.038417    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:56.049461    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:58:56.049540    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:56.061306    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:58:56.061396    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:56.077817    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:58:56.077906    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:56.089530    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:58:56.089614    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:56.102985    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:58:56.103068    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:56.115205    8893 logs.go:276] 0 containers: []
	W0920 10:58:56.115216    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:56.115283    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:56.126535    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:58:56.126552    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:56.126558    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:56.162931    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:58:56.162945    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:58:56.176210    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:58:56.176224    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:58:56.188854    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:58:56.188869    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:58:56.204840    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:58:56.204852    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:56.217470    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:56.217479    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:56.254203    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:58:56.254229    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:58:56.270129    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:58:56.270142    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:58:56.282393    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:58:56.282406    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:58:56.294700    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:58:56.294713    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:58:56.307274    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:58:56.307286    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:58:56.326921    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:56.326937    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:56.355021    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:56.355038    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:56.360469    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:58:56.360482    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:58:56.375892    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:58:56.375903    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:58:56.124196    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:58:56.127169    9036 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:58:56.127178    9036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:58:56.127186    9036 sshutil.go:53] new ssh client: &{IP:localhost Port:51506 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0920 10:58:56.216688    9036 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:58:56.223802    9036 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:58:56.223868    9036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:58:56.229304    9036 api_server.go:72] duration metric: took 119.896458ms to wait for apiserver process to appear ...
	I0920 10:58:56.229314    9036 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:58:56.229323    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:56.238827    9036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:58:57.089132    9036 kapi.go:59] client config for stopped-upgrade-423000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/client.key", CAFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f5a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:58:57.089264    9036 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-423000"
	W0920 10:58:57.089273    9036 addons.go:243] addon default-storageclass should already be in state true
	I0920 10:58:57.089286    9036 host.go:66] Checking if "stopped-upgrade-423000" exists ...
	I0920 10:58:57.089905    9036 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:58:57.089911    9036 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:58:57.089917    9036 sshutil.go:53] new ssh client: &{IP:localhost Port:51506 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0920 10:58:57.120145    9036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:58:57.185558    9036 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 10:58:57.185573    9036 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 10:58:58.888549    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:01.231375    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:01.231419    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:03.890735    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:03.890929    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:59:03.902635    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:59:03.902732    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:59:03.913879    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:59:03.913961    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:59:03.924651    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:59:03.924731    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:59:03.936427    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:59:03.936501    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:59:03.947400    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:59:03.947487    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:59:03.958168    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:59:03.958248    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:59:03.973240    8893 logs.go:276] 0 containers: []
	W0920 10:59:03.973251    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:59:03.973314    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:59:03.983923    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:59:03.983941    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:59:03.983946    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:59:03.995507    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:59:03.995520    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:59:04.000259    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:59:04.000266    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:59:04.014168    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:59:04.014183    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:59:04.029210    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:59:04.029223    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:59:04.051843    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:59:04.051851    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:59:04.087030    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:59:04.087042    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:59:04.102206    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:59:04.102219    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:59:04.117364    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:59:04.117377    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:59:04.129198    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:59:04.129212    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:59:04.145269    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:59:04.145282    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:59:04.179423    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:59:04.179432    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:59:04.191431    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:59:04.191444    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:59:04.203209    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:59:04.203223    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:59:04.215014    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:59:04.215028    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:59:06.734673    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:06.231695    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:06.231721    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:11.736960    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:11.737077    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:59:11.749993    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:59:11.750081    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:59:11.760720    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:59:11.760805    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:59:11.771453    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:59:11.771540    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:59:11.782381    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:59:11.782466    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:59:11.792851    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:59:11.792938    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:59:11.803481    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:59:11.803561    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:59:11.815916    8893 logs.go:276] 0 containers: []
	W0920 10:59:11.815932    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:59:11.816009    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:59:11.828232    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:59:11.828251    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:59:11.828257    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:59:11.865280    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:59:11.865292    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:59:11.878265    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:59:11.878277    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:59:11.891806    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:59:11.891819    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:59:11.906000    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:59:11.906012    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:59:11.940511    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:59:11.940522    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:59:11.952369    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:59:11.952382    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:59:11.967723    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:59:11.967737    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:59:11.986133    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:59:11.986144    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:59:11.998282    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:59:11.998302    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:59:12.014606    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:59:12.014624    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:59:12.027606    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:59:12.027619    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:59:12.032539    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:59:12.032552    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:59:12.045348    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:59:12.045360    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:59:12.064172    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:59:12.064182    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:59:11.232036    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:11.232063    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:14.589911    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:16.232508    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:16.232549    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:19.592267    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:19.592673    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:59:19.638428    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:59:19.638541    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:59:19.653292    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:59:19.653392    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:59:19.665755    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:59:19.665845    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:59:19.676433    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:59:19.676508    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:59:19.687161    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:59:19.687244    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:59:19.697790    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:59:19.697875    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:59:19.707731    8893 logs.go:276] 0 containers: []
	W0920 10:59:19.707747    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:59:19.707813    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:59:19.718477    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:59:19.718498    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:59:19.718503    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:59:19.730961    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:59:19.730975    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:59:19.766798    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:59:19.766808    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:59:19.784882    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:59:19.784893    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:59:19.797275    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:59:19.797291    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:59:19.810485    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:59:19.810498    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:59:19.834427    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:59:19.834435    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:59:19.869440    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:59:19.869451    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:59:19.886724    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:59:19.886735    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:59:19.898529    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:59:19.898538    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:59:19.913691    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:59:19.913702    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:59:19.925757    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:59:19.925771    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:59:19.943270    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:59:19.943280    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:59:19.948484    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:59:19.948494    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:59:19.969233    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:59:19.969243    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:59:22.483858    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:21.233267    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:21.233316    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:26.234235    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:26.234276    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0920 10:59:27.185746    9036 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0920 10:59:27.189978    9036 out.go:177] * Enabled addons: storage-provisioner
	I0920 10:59:27.486078    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:27.486363    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:59:27.514269    8893 logs.go:276] 1 containers: [b2ffcce40af8]
	I0920 10:59:27.514415    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:59:27.532365    8893 logs.go:276] 1 containers: [5aec1155d099]
	I0920 10:59:27.532451    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:59:27.546135    8893 logs.go:276] 4 containers: [b20591927849 874ed017dcef 9c5c743915c1 2f543a3a77a1]
	I0920 10:59:27.546212    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:59:27.557271    8893 logs.go:276] 1 containers: [40946a601801]
	I0920 10:59:27.557344    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:59:27.568277    8893 logs.go:276] 1 containers: [e93ec3297bcf]
	I0920 10:59:27.568362    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:59:27.579181    8893 logs.go:276] 1 containers: [9645e45c5a4f]
	I0920 10:59:27.579268    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:59:27.590114    8893 logs.go:276] 0 containers: []
	W0920 10:59:27.590130    8893 logs.go:278] No container was found matching "kindnet"
	I0920 10:59:27.590204    8893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:59:27.600877    8893 logs.go:276] 1 containers: [563be55f69cf]
	I0920 10:59:27.600898    8893 logs.go:123] Gathering logs for etcd [5aec1155d099] ...
	I0920 10:59:27.600904    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aec1155d099"
	I0920 10:59:27.617320    8893 logs.go:123] Gathering logs for coredns [b20591927849] ...
	I0920 10:59:27.617330    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b20591927849"
	I0920 10:59:27.629080    8893 logs.go:123] Gathering logs for coredns [9c5c743915c1] ...
	I0920 10:59:27.629094    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5c743915c1"
	I0920 10:59:27.645250    8893 logs.go:123] Gathering logs for coredns [2f543a3a77a1] ...
	I0920 10:59:27.645260    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f543a3a77a1"
	I0920 10:59:27.657357    8893 logs.go:123] Gathering logs for Docker ...
	I0920 10:59:27.657372    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:59:27.681650    8893 logs.go:123] Gathering logs for kubelet ...
	I0920 10:59:27.681657    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:59:27.714691    8893 logs.go:123] Gathering logs for coredns [874ed017dcef] ...
	I0920 10:59:27.714697    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 874ed017dcef"
	I0920 10:59:27.725943    8893 logs.go:123] Gathering logs for container status ...
	I0920 10:59:27.725954    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:59:27.738051    8893 logs.go:123] Gathering logs for kube-apiserver [b2ffcce40af8] ...
	I0920 10:59:27.738062    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ffcce40af8"
	I0920 10:59:27.751877    8893 logs.go:123] Gathering logs for kube-proxy [e93ec3297bcf] ...
	I0920 10:59:27.751893    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e93ec3297bcf"
	I0920 10:59:27.763985    8893 logs.go:123] Gathering logs for kube-controller-manager [9645e45c5a4f] ...
	I0920 10:59:27.763997    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9645e45c5a4f"
	I0920 10:59:27.781924    8893 logs.go:123] Gathering logs for kube-scheduler [40946a601801] ...
	I0920 10:59:27.781940    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40946a601801"
	I0920 10:59:27.796962    8893 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:59:27.796971    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:59:27.832378    8893 logs.go:123] Gathering logs for storage-provisioner [563be55f69cf] ...
	I0920 10:59:27.832389    8893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be55f69cf"
	I0920 10:59:27.199909    9036 addons.go:510] duration metric: took 31.090657917s for enable addons: enabled=[storage-provisioner]
	I0920 10:59:27.848133    8893 logs.go:123] Gathering logs for dmesg ...
	I0920 10:59:27.848144    8893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:59:30.354957    8893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:35.357287    8893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:35.362994    8893 out.go:201] 
	W0920 10:59:35.366897    8893 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0920 10:59:35.366913    8893 out.go:270] * 
	W0920 10:59:35.368040    8893 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:59:35.382928    8893 out.go:201] 
	I0920 10:59:31.235314    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:31.235355    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:36.236759    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:36.236822    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:41.238545    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:41.238598    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:46.239430    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:46.239472    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-09-20 17:50:29 UTC, ends at Fri 2024-09-20 17:59:51 UTC. --
	Sep 20 17:59:35 running-upgrade-568000 dockerd[3354]: time="2024-09-20T17:59:35.831648633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 20 17:59:35 running-upgrade-568000 dockerd[3354]: time="2024-09-20T17:59:35.831677882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 20 17:59:35 running-upgrade-568000 dockerd[3354]: time="2024-09-20T17:59:35.831693423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 17:59:35 running-upgrade-568000 dockerd[3354]: time="2024-09-20T17:59:35.831740963Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/be32a741a7b5a34a0ede64dc7edc79e16dd9fe372380c3e7e2d88340a8d68aa4 pid=18977 runtime=io.containerd.runc.v2
	Sep 20 17:59:36 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:36Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 20 17:59:36 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:36Z" level=error msg="ContainerStats resp: {0x400083dac0 linux}"
	Sep 20 17:59:37 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:37Z" level=error msg="ContainerStats resp: {0x4000919740 linux}"
	Sep 20 17:59:37 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:37Z" level=error msg="ContainerStats resp: {0x4000919b00 linux}"
	Sep 20 17:59:37 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:37Z" level=error msg="ContainerStats resp: {0x4000919ec0 linux}"
	Sep 20 17:59:37 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:37Z" level=error msg="ContainerStats resp: {0x4000420800 linux}"
	Sep 20 17:59:37 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:37Z" level=error msg="ContainerStats resp: {0x4000421300 linux}"
	Sep 20 17:59:37 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:37Z" level=error msg="ContainerStats resp: {0x4000421d40 linux}"
	Sep 20 17:59:37 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:37Z" level=error msg="ContainerStats resp: {0x40007d2740 linux}"
	Sep 20 17:59:41 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:41Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 20 17:59:46 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:46Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 20 17:59:47 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:47Z" level=error msg="ContainerStats resp: {0x40006a1500 linux}"
	Sep 20 17:59:47 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:47Z" level=error msg="ContainerStats resp: {0x40006a1640 linux}"
	Sep 20 17:59:48 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:48Z" level=error msg="ContainerStats resp: {0x40007d3400 linux}"
	Sep 20 17:59:49 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:49Z" level=error msg="ContainerStats resp: {0x400082c380 linux}"
	Sep 20 17:59:49 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:49Z" level=error msg="ContainerStats resp: {0x40003d3400 linux}"
	Sep 20 17:59:49 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:49Z" level=error msg="ContainerStats resp: {0x40003d3780 linux}"
	Sep 20 17:59:49 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:49Z" level=error msg="ContainerStats resp: {0x40003d3bc0 linux}"
	Sep 20 17:59:49 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:49Z" level=error msg="ContainerStats resp: {0x400082d3c0 linux}"
	Sep 20 17:59:49 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:49Z" level=error msg="ContainerStats resp: {0x40009ea1c0 linux}"
	Sep 20 17:59:49 running-upgrade-568000 cri-dockerd[3194]: time="2024-09-20T17:59:49Z" level=error msg="ContainerStats resp: {0x40009ea600 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	be32a741a7b5a       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   20c201cd2bf69
	ca07487a66c03       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   3fcfcec6a8e29
	b205919278499       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   20c201cd2bf69
	874ed017dcef2       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   3fcfcec6a8e29
	e93ec3297bcf8       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   679d495f8f00d
	563be55f69cfc       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   223d772b86f1c
	40946a6018017       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   8e10aab844a9e
	5aec1155d0998       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   41c2cca1c5ce1
	9645e45c5a4f5       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   7ef28c14b5199
	b2ffcce40af81       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   727f4f965da63
	
	
	==> coredns [874ed017dcef] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7202645822658681623.875410555277758222. HINFO: read udp 10.244.0.3:51562->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7202645822658681623.875410555277758222. HINFO: read udp 10.244.0.3:41977->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7202645822658681623.875410555277758222. HINFO: read udp 10.244.0.3:33477->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7202645822658681623.875410555277758222. HINFO: read udp 10.244.0.3:58052->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7202645822658681623.875410555277758222. HINFO: read udp 10.244.0.3:43948->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7202645822658681623.875410555277758222. HINFO: read udp 10.244.0.3:52073->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7202645822658681623.875410555277758222. HINFO: read udp 10.244.0.3:40734->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7202645822658681623.875410555277758222. HINFO: read udp 10.244.0.3:48879->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7202645822658681623.875410555277758222. HINFO: read udp 10.244.0.3:39717->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b20591927849] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7733347695001685885.2658096302896292042. HINFO: read udp 10.244.0.2:50000->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7733347695001685885.2658096302896292042. HINFO: read udp 10.244.0.2:56218->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7733347695001685885.2658096302896292042. HINFO: read udp 10.244.0.2:34418->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7733347695001685885.2658096302896292042. HINFO: read udp 10.244.0.2:60852->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7733347695001685885.2658096302896292042. HINFO: read udp 10.244.0.2:45181->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7733347695001685885.2658096302896292042. HINFO: read udp 10.244.0.2:38818->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7733347695001685885.2658096302896292042. HINFO: read udp 10.244.0.2:33706->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7733347695001685885.2658096302896292042. HINFO: read udp 10.244.0.2:36344->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7733347695001685885.2658096302896292042. HINFO: read udp 10.244.0.2:54061->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7733347695001685885.2658096302896292042. HINFO: read udp 10.244.0.2:45480->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [be32a741a7b5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2952839789473185110.752686393709672163. HINFO: read udp 10.244.0.2:46282->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2952839789473185110.752686393709672163. HINFO: read udp 10.244.0.2:38420->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2952839789473185110.752686393709672163. HINFO: read udp 10.244.0.2:59693->10.0.2.3:53: i/o timeout
	
	
	==> coredns [ca07487a66c0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8302341936492343255.7292625065752976115. HINFO: read udp 10.244.0.3:46082->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8302341936492343255.7292625065752976115. HINFO: read udp 10.244.0.3:42066->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8302341936492343255.7292625065752976115. HINFO: read udp 10.244.0.3:44308->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8302341936492343255.7292625065752976115. HINFO: read udp 10.244.0.3:43784->10.0.2.3:53: i/o timeout
	
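	Note: every CoreDNS error in the blocks above is the same failure: the startup HINFO probe to the configured upstream resolver 10.0.2.3:53 (the DNS forwarder of QEMU's user-mode network) times out, so CoreDNS serves but has no working upstream. A minimal in-cluster check, assuming a live profile (the pod name and busybox image are illustrative, not taken from this run):

	# query the upstream resolver directly from a throwaway pod
	$ kubectl run dnscheck --rm -it --restart=Never --image=busybox -- \
	      nslookup kubernetes.default.svc.cluster.local 10.0.2.3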
	
	==> describe nodes <==
	Name:               running-upgrade-568000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-568000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=running-upgrade-568000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T10_55_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:55:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-568000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:59:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:55:34 +0000   Fri, 20 Sep 2024 17:55:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:55:34 +0000   Fri, 20 Sep 2024 17:55:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:55:34 +0000   Fri, 20 Sep 2024 17:55:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:55:34 +0000   Fri, 20 Sep 2024 17:55:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-568000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 3585c8ced26e4f09aea3d45a80a4706c
	  System UUID:                3585c8ced26e4f09aea3d45a80a4706c
	  Boot ID:                    b5266264-cbd4-4d96-8363-744d23f8031b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-6ww22                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-b92jv                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-568000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-568000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-568000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-26pzc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-568000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-568000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-568000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-568000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-568000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-568000 event: Registered Node running-upgrade-568000 in Controller
	
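	Note: the Allocated resources figures above are consistent with the pod list: CPU requests sum to 100m + 100m (two CoreDNS) + 100m (etcd) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) = 850m, i.e. 42% of the node's 2000m allocatable. The same view can be reproduced against a live profile with:

	$ kubectl describe node running-upgrade-568000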
	
	==> dmesg <==
	[  +1.483717] systemd-fstab-generator[830]: Ignoring "noauto" for root device
	[  +0.075961] systemd-fstab-generator[841]: Ignoring "noauto" for root device
	[  +0.075995] systemd-fstab-generator[852]: Ignoring "noauto" for root device
	[  +1.151553] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.085393] systemd-fstab-generator[1002]: Ignoring "noauto" for root device
	[  +0.086296] systemd-fstab-generator[1013]: Ignoring "noauto" for root device
	[  +2.505704] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +9.659885] systemd-fstab-generator[1943]: Ignoring "noauto" for root device
	[  +2.610415] systemd-fstab-generator[2220]: Ignoring "noauto" for root device
	[  +0.140887] systemd-fstab-generator[2254]: Ignoring "noauto" for root device
	[  +0.086970] systemd-fstab-generator[2267]: Ignoring "noauto" for root device
	[  +0.097504] systemd-fstab-generator[2281]: Ignoring "noauto" for root device
	[Sep20 17:51] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.210186] systemd-fstab-generator[3151]: Ignoring "noauto" for root device
	[  +0.080952] systemd-fstab-generator[3162]: Ignoring "noauto" for root device
	[  +0.075909] systemd-fstab-generator[3173]: Ignoring "noauto" for root device
	[  +0.079841] systemd-fstab-generator[3187]: Ignoring "noauto" for root device
	[  +2.499446] systemd-fstab-generator[3341]: Ignoring "noauto" for root device
	[  +3.456552] systemd-fstab-generator[3732]: Ignoring "noauto" for root device
	[  +1.233171] systemd-fstab-generator[4034]: Ignoring "noauto" for root device
	[ +18.602429] kauditd_printk_skb: 68 callbacks suppressed
	[Sep20 17:55] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.372224] systemd-fstab-generator[12017]: Ignoring "noauto" for root device
	[  +5.635413] systemd-fstab-generator[12607]: Ignoring "noauto" for root device
	[  +0.460263] systemd-fstab-generator[12741]: Ignoring "noauto" for root device
	
	
	==> etcd [5aec1155d099] <==
	{"level":"info","ts":"2024-09-20T17:55:29.854Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-20T17:55:29.854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-20T17:55:29.855Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-20T17:55:29.851Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-20T17:55:29.855Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-20T17:55:29.855Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T17:55:29.855Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T17:55:30.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T17:55:30.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T17:55:30.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-20T17:55:30.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T17:55:30.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-20T17:55:30.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-20T17:55:30.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-20T17:55:30.525Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-568000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T17:55:30.525Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:55:30.525Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:55:30.525Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T17:55:30.525Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T17:55:30.525Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:55:30.526Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T17:55:30.526Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-20T17:55:30.526Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:55:30.527Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:55:30.527Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
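	Note: the etcd log above shows a clean single-node bootstrap: member f074a195de705325 pre-votes, votes for itself, and is elected leader at term 2. A sketch for spot-checking member health on a live profile (the certificate paths are minikube's usual defaults, assumed here rather than taken from this run):

	$ kubectl -n kube-system exec etcd-running-upgrade-568000 -- sh -c \
	      'ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	         --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	         --cert=/var/lib/minikube/certs/etcd/server.crt \
	         --key=/var/lib/minikube/certs/etcd/server.key endpoint health'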
	
	==> kernel <==
	 17:59:51 up 9 min,  0 users,  load average: 0.08, 0.29, 0.20
	Linux running-upgrade-568000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [b2ffcce40af8] <==
	I0920 17:55:31.735689       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0920 17:55:31.769002       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0920 17:55:31.769034       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0920 17:55:31.769785       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 17:55:31.782723       1 cache.go:39] Caches are synced for autoregister controller
	I0920 17:55:31.783753       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0920 17:55:31.803556       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0920 17:55:32.504943       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0920 17:55:32.672219       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0920 17:55:32.674600       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0920 17:55:32.674619       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 17:55:32.835627       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 17:55:32.845898       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 17:55:32.937751       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0920 17:55:32.939744       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0920 17:55:32.940115       1 controller.go:611] quota admission added evaluator for: endpoints
	I0920 17:55:32.941303       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 17:55:33.813977       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0920 17:55:34.233658       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0920 17:55:34.237219       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0920 17:55:34.248330       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0920 17:55:34.290417       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 17:55:47.416878       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0920 17:55:47.567561       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0920 17:55:48.117042       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [9645e45c5a4f] <==
	I0920 17:55:46.663557       1 shared_informer.go:262] Caches are synced for daemon sets
	I0920 17:55:46.664404       1 shared_informer.go:262] Caches are synced for taint
	I0920 17:55:46.664448       1 shared_informer.go:262] Caches are synced for HPA
	I0920 17:55:46.664479       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0920 17:55:46.664606       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0920 17:55:46.664922       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-568000. Assuming now as a timestamp.
	I0920 17:55:46.664960       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0920 17:55:46.665094       1 event.go:294] "Event occurred" object="running-upgrade-568000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-568000 event: Registered Node running-upgrade-568000 in Controller"
	I0920 17:55:46.665407       1 shared_informer.go:262] Caches are synced for PV protection
	I0920 17:55:46.665411       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0920 17:55:46.665414       1 shared_informer.go:262] Caches are synced for ephemeral
	I0920 17:55:46.673074       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0920 17:55:46.814445       1 shared_informer.go:262] Caches are synced for job
	I0920 17:55:46.823414       1 shared_informer.go:262] Caches are synced for resource quota
	I0920 17:55:46.839438       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0920 17:55:46.840530       1 shared_informer.go:262] Caches are synced for attach detach
	I0920 17:55:46.857601       1 shared_informer.go:262] Caches are synced for cronjob
	I0920 17:55:46.867827       1 shared_informer.go:262] Caches are synced for resource quota
	I0920 17:55:47.281654       1 shared_informer.go:262] Caches are synced for garbage collector
	I0920 17:55:47.316661       1 shared_informer.go:262] Caches are synced for garbage collector
	I0920 17:55:47.316700       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0920 17:55:47.418139       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0920 17:55:47.569955       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-26pzc"
	I0920 17:55:47.669240       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-6ww22"
	I0920 17:55:47.682961       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-b92jv"
	
	
	==> kube-proxy [e93ec3297bcf] <==
	I0920 17:55:48.093929       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0920 17:55:48.093966       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0920 17:55:48.093976       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0920 17:55:48.114631       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0920 17:55:48.114641       1 server_others.go:206] "Using iptables Proxier"
	I0920 17:55:48.114656       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0920 17:55:48.115007       1 server.go:661] "Version info" version="v1.24.1"
	I0920 17:55:48.115011       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:55:48.115253       1 config.go:317] "Starting service config controller"
	I0920 17:55:48.115261       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0920 17:55:48.115269       1 config.go:226] "Starting endpoint slice config controller"
	I0920 17:55:48.115271       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0920 17:55:48.115494       1 config.go:444] "Starting node config controller"
	I0920 17:55:48.115496       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0920 17:55:48.215435       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0920 17:55:48.215462       1 shared_informer.go:262] Caches are synced for service config
	I0920 17:55:48.215595       1 shared_informer.go:262] Caches are synced for node config
	
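	Note: kube-proxy reports an empty proxyMode and falls back to the iptables proxier, the expected default on Linux. The configured mode can be read back from the kube-proxy ConfigMap (a sketch assuming the kubeadm-style config.conf key):

	$ kubectl -n kube-system get configmap kube-proxy \
	      -o jsonpath='{.data.config\.conf}' | grep mode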
	
	==> kube-scheduler [40946a601801] <==
	W0920 17:55:31.731816       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:55:31.731835       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0920 17:55:31.731884       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:55:31.731901       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0920 17:55:31.731930       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 17:55:31.731967       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0920 17:55:31.732008       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 17:55:31.732025       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0920 17:55:32.591038       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 17:55:32.591173       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0920 17:55:32.608912       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:55:32.608950       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0920 17:55:32.616873       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:55:32.616892       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0920 17:55:32.619505       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:55:32.619574       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0920 17:55:32.641622       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:55:32.641711       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0920 17:55:32.657072       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 17:55:32.657130       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 17:55:32.668882       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 17:55:32.669020       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0920 17:55:32.755372       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:55:32.755450       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0920 17:55:34.829374       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
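	Note: the forbidden list/watch errors above are the scheduler racing the apiserver's RBAC bootstrap at startup; they stop once the informer caches sync (last line). After startup the scheduler's permissions can be spot-checked with an impersonated query:

	$ kubectl auth can-i list pods --as=system:kube-scheduler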
	
	==> kubelet <==
	-- Journal begins at Fri 2024-09-20 17:50:29 UTC, ends at Fri 2024-09-20 17:59:51 UTC. --
	Sep 20 17:55:36 running-upgrade-568000 kubelet[12613]: E0920 17:55:36.067456   12613 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-568000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-568000"
	Sep 20 17:55:36 running-upgrade-568000 kubelet[12613]: E0920 17:55:36.268009   12613 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-568000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-568000"
	Sep 20 17:55:36 running-upgrade-568000 kubelet[12613]: I0920 17:55:36.465568   12613 request.go:601] Waited for 1.118525643s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 20 17:55:36 running-upgrade-568000 kubelet[12613]: E0920 17:55:36.468912   12613 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-568000\" already exists" pod="kube-system/etcd-running-upgrade-568000"
	Sep 20 17:55:46 running-upgrade-568000 kubelet[12613]: I0920 17:55:46.669233   12613 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 17:55:46 running-upgrade-568000 kubelet[12613]: I0920 17:55:46.695774   12613 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 20 17:55:46 running-upgrade-568000 kubelet[12613]: I0920 17:55:46.695781   12613 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckwrz\" (UniqueName: \"kubernetes.io/projected/a5039dac-9ddf-4de7-8ccb-7f500af4c0da-kube-api-access-ckwrz\") pod \"storage-provisioner\" (UID: \"a5039dac-9ddf-4de7-8ccb-7f500af4c0da\") " pod="kube-system/storage-provisioner"
	Sep 20 17:55:46 running-upgrade-568000 kubelet[12613]: I0920 17:55:46.695840   12613 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a5039dac-9ddf-4de7-8ccb-7f500af4c0da-tmp\") pod \"storage-provisioner\" (UID: \"a5039dac-9ddf-4de7-8ccb-7f500af4c0da\") " pod="kube-system/storage-provisioner"
	Sep 20 17:55:46 running-upgrade-568000 kubelet[12613]: I0920 17:55:46.696091   12613 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 20 17:55:46 running-upgrade-568000 kubelet[12613]: E0920 17:55:46.801873   12613 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 20 17:55:46 running-upgrade-568000 kubelet[12613]: E0920 17:55:46.801891   12613 projected.go:192] Error preparing data for projected volume kube-api-access-ckwrz for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 20 17:55:46 running-upgrade-568000 kubelet[12613]: E0920 17:55:46.801926   12613 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/a5039dac-9ddf-4de7-8ccb-7f500af4c0da-kube-api-access-ckwrz podName:a5039dac-9ddf-4de7-8ccb-7f500af4c0da nodeName:}" failed. No retries permitted until 2024-09-20 17:55:47.30191257 +0000 UTC m=+13.080546681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ckwrz" (UniqueName: "kubernetes.io/projected/a5039dac-9ddf-4de7-8ccb-7f500af4c0da-kube-api-access-ckwrz") pod "storage-provisioner" (UID: "a5039dac-9ddf-4de7-8ccb-7f500af4c0da") : configmap "kube-root-ca.crt" not found
	Sep 20 17:55:47 running-upgrade-568000 kubelet[12613]: I0920 17:55:47.571897   12613 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 17:55:47 running-upgrade-568000 kubelet[12613]: I0920 17:55:47.610887   12613 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08a3cd3a-b74a-418a-8ee9-45a4806ee522-xtables-lock\") pod \"kube-proxy-26pzc\" (UID: \"08a3cd3a-b74a-418a-8ee9-45a4806ee522\") " pod="kube-system/kube-proxy-26pzc"
	Sep 20 17:55:47 running-upgrade-568000 kubelet[12613]: I0920 17:55:47.610914   12613 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/08a3cd3a-b74a-418a-8ee9-45a4806ee522-kube-proxy\") pod \"kube-proxy-26pzc\" (UID: \"08a3cd3a-b74a-418a-8ee9-45a4806ee522\") " pod="kube-system/kube-proxy-26pzc"
	Sep 20 17:55:47 running-upgrade-568000 kubelet[12613]: I0920 17:55:47.610924   12613 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08a3cd3a-b74a-418a-8ee9-45a4806ee522-lib-modules\") pod \"kube-proxy-26pzc\" (UID: \"08a3cd3a-b74a-418a-8ee9-45a4806ee522\") " pod="kube-system/kube-proxy-26pzc"
	Sep 20 17:55:47 running-upgrade-568000 kubelet[12613]: I0920 17:55:47.610938   12613 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmv5k\" (UniqueName: \"kubernetes.io/projected/08a3cd3a-b74a-418a-8ee9-45a4806ee522-kube-api-access-xmv5k\") pod \"kube-proxy-26pzc\" (UID: \"08a3cd3a-b74a-418a-8ee9-45a4806ee522\") " pod="kube-system/kube-proxy-26pzc"
	Sep 20 17:55:47 running-upgrade-568000 kubelet[12613]: I0920 17:55:47.676551   12613 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 17:55:47 running-upgrade-568000 kubelet[12613]: I0920 17:55:47.685350   12613 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 17:55:47 running-upgrade-568000 kubelet[12613]: I0920 17:55:47.712253   12613 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bce97111-8325-4502-89a1-e8d8ce790dbb-config-volume\") pod \"coredns-6d4b75cb6d-b92jv\" (UID: \"bce97111-8325-4502-89a1-e8d8ce790dbb\") " pod="kube-system/coredns-6d4b75cb6d-b92jv"
	Sep 20 17:55:47 running-upgrade-568000 kubelet[12613]: I0920 17:55:47.712587   12613 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dktsf\" (UniqueName: \"kubernetes.io/projected/85bb3589-8774-4d71-bb30-1275f1706291-kube-api-access-dktsf\") pod \"coredns-6d4b75cb6d-6ww22\" (UID: \"85bb3589-8774-4d71-bb30-1275f1706291\") " pod="kube-system/coredns-6d4b75cb6d-6ww22"
	Sep 20 17:55:47 running-upgrade-568000 kubelet[12613]: I0920 17:55:47.712602   12613 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85bb3589-8774-4d71-bb30-1275f1706291-config-volume\") pod \"coredns-6d4b75cb6d-6ww22\" (UID: \"85bb3589-8774-4d71-bb30-1275f1706291\") " pod="kube-system/coredns-6d4b75cb6d-6ww22"
	Sep 20 17:55:47 running-upgrade-568000 kubelet[12613]: I0920 17:55:47.712613   12613 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h7cz\" (UniqueName: \"kubernetes.io/projected/bce97111-8325-4502-89a1-e8d8ce790dbb-kube-api-access-8h7cz\") pod \"coredns-6d4b75cb6d-b92jv\" (UID: \"bce97111-8325-4502-89a1-e8d8ce790dbb\") " pod="kube-system/coredns-6d4b75cb6d-b92jv"
	Sep 20 17:59:35 running-upgrade-568000 kubelet[12613]: I0920 17:59:35.699900   12613 scope.go:110] "RemoveContainer" containerID="9c5c743915c1569ceb1bf13f80edfab9dc807d4e3247d3d1d8480bcaa26628c8"
	Sep 20 17:59:36 running-upgrade-568000 kubelet[12613]: I0920 17:59:36.732557   12613 scope.go:110] "RemoveContainer" containerID="2f543a3a77a1aa6ca5737da14e38839d4bb7573530bb185e2eb447826c53f9c5"
	
	
	==> storage-provisioner [563be55f69cf] <==
	I0920 17:55:47.754275       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:55:47.759318       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:55:47.759396       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:55:47.762906       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:55:47.763006       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-568000_85955a90-b010-4d44-b12d-c8254a79550c!
	I0920 17:55:47.763441       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"38eae211-3795-4605-8c09-611d5d9bdb2a", APIVersion:"v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-568000_85955a90-b010-4d44-b12d-c8254a79550c became leader
	I0920 17:55:47.863478       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-568000_85955a90-b010-4d44-b12d-c8254a79550c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-568000 -n running-upgrade-568000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-568000 -n running-upgrade-568000: exit status 2 (15.727022041s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-568000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-568000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-568000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-568000: (1.215913375s)
--- FAIL: TestRunningBinaryUpgrade (605.36s)
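Note: to iterate on this failure in isolation, the test can be re-run by name with Go's -run filter (a sketch assuming minikube's test/integration package layout):

	$ go test -v -timeout 30m -run TestRunningBinaryUpgrade ./test/integration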

TestKubernetesUpgrade (18.64s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-744000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-744000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.844653041s)

-- stdout --
	* [kubernetes-upgrade-744000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-744000" primary control-plane node in "kubernetes-upgrade-744000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-744000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
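Note: both VM creation attempts above fail identically with connection refused on /var/run/socket_vmnet, so the qemu2 driver never gets a network endpoint and the failure is host-side, before Kubernetes is involved. A minimal host check (the launchd label depends on how socket_vmnet was installed, so the grep is deliberately loose):

	# is the socket present, and is the daemon registered with launchd?
	$ ls -l /var/run/socket_vmnet
	$ sudo launchctl list | grep -i socket_vmnet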
** stderr ** 
	I0920 10:53:03.725034    8965 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:53:03.725173    8965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:53:03.725180    8965 out.go:358] Setting ErrFile to fd 2...
	I0920 10:53:03.725183    8965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:53:03.725318    8965 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:53:03.726441    8965 out.go:352] Setting JSON to false
	I0920 10:53:03.742902    8965 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4946,"bootTime":1726849837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:53:03.743028    8965 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:53:03.748905    8965 out.go:177] * [kubernetes-upgrade-744000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:53:03.755706    8965 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:53:03.755759    8965 notify.go:220] Checking for updates...
	I0920 10:53:03.762688    8965 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:53:03.765582    8965 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:53:03.768664    8965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:53:03.771637    8965 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:53:03.773035    8965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:53:03.776091    8965 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:53:03.776157    8965 config.go:182] Loaded profile config "running-upgrade-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:53:03.776205    8965 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:53:03.780629    8965 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:53:03.785623    8965 start.go:297] selected driver: qemu2
	I0920 10:53:03.785628    8965 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:53:03.785634    8965 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:53:03.787986    8965 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:53:03.790649    8965 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:53:03.794654    8965 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:53:03.794681    8965 cni.go:84] Creating CNI manager for ""
	I0920 10:53:03.794702    8965 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 10:53:03.794727    8965 start.go:340] cluster config:
	{Name:kubernetes-upgrade-744000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:53:03.798183    8965 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:53:03.805696    8965 out.go:177] * Starting "kubernetes-upgrade-744000" primary control-plane node in "kubernetes-upgrade-744000" cluster
	I0920 10:53:03.809685    8965 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:53:03.809703    8965 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:53:03.809710    8965 cache.go:56] Caching tarball of preloaded images
	I0920 10:53:03.809774    8965 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:53:03.809780    8965 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 10:53:03.809857    8965 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/kubernetes-upgrade-744000/config.json ...
	I0920 10:53:03.809868    8965 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/kubernetes-upgrade-744000/config.json: {Name:mkd16083adb3bd813777fcbf9ed682a2e78867e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:53:03.810264    8965 start.go:360] acquireMachinesLock for kubernetes-upgrade-744000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:53:03.810296    8965 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "kubernetes-upgrade-744000"
	I0920 10:53:03.810311    8965 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:53:03.810336    8965 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:53:03.818627    8965 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:53:03.834887    8965 start.go:159] libmachine.API.Create for "kubernetes-upgrade-744000" (driver="qemu2")
	I0920 10:53:03.834912    8965 client.go:168] LocalClient.Create starting
	I0920 10:53:03.834980    8965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:53:03.835011    8965 main.go:141] libmachine: Decoding PEM data...
	I0920 10:53:03.835019    8965 main.go:141] libmachine: Parsing certificate...
	I0920 10:53:03.835055    8965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:53:03.835078    8965 main.go:141] libmachine: Decoding PEM data...
	I0920 10:53:03.835093    8965 main.go:141] libmachine: Parsing certificate...
	I0920 10:53:03.835530    8965 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:53:04.028840    8965 main.go:141] libmachine: Creating SSH key...
	I0920 10:53:04.089913    8965 main.go:141] libmachine: Creating Disk image...
	I0920 10:53:04.089921    8965 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:53:04.090145    8965 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2
	I0920 10:53:04.100603    8965 main.go:141] libmachine: STDOUT: 
	I0920 10:53:04.100636    8965 main.go:141] libmachine: STDERR: 
	I0920 10:53:04.100709    8965 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2 +20000M
	I0920 10:53:04.109806    8965 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:53:04.109829    8965 main.go:141] libmachine: STDERR: 
	I0920 10:53:04.109848    8965 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2
	I0920 10:53:04.109857    8965 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:53:04.109870    8965 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:53:04.109903    8965 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:07:ba:20:0b:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2
	I0920 10:53:04.111850    8965 main.go:141] libmachine: STDOUT: 
	I0920 10:53:04.111867    8965 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:53:04.111887    8965 client.go:171] duration metric: took 276.969541ms to LocalClient.Create
	I0920 10:53:06.113996    8965 start.go:128] duration metric: took 2.303658084s to createHost
	I0920 10:53:06.114073    8965 start.go:83] releasing machines lock for "kubernetes-upgrade-744000", held for 2.303782125s
	W0920 10:53:06.114120    8965 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:53:06.122654    8965 out.go:177] * Deleting "kubernetes-upgrade-744000" in qemu2 ...
	W0920 10:53:06.157230    8965 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:53:06.157252    8965 start.go:729] Will try again in 5 seconds ...
	I0920 10:53:11.159523    8965 start.go:360] acquireMachinesLock for kubernetes-upgrade-744000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:53:11.160192    8965 start.go:364] duration metric: took 537.167µs to acquireMachinesLock for "kubernetes-upgrade-744000"
	I0920 10:53:11.160340    8965 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:53:11.160653    8965 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:53:11.166092    8965 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:53:11.218598    8965 start.go:159] libmachine.API.Create for "kubernetes-upgrade-744000" (driver="qemu2")
	I0920 10:53:11.218652    8965 client.go:168] LocalClient.Create starting
	I0920 10:53:11.218801    8965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 10:53:11.218868    8965 main.go:141] libmachine: Decoding PEM data...
	I0920 10:53:11.218885    8965 main.go:141] libmachine: Parsing certificate...
	I0920 10:53:11.218948    8965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 10:53:11.218993    8965 main.go:141] libmachine: Decoding PEM data...
	I0920 10:53:11.219017    8965 main.go:141] libmachine: Parsing certificate...
	I0920 10:53:11.219554    8965 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:53:11.396877    8965 main.go:141] libmachine: Creating SSH key...
	I0920 10:53:11.484099    8965 main.go:141] libmachine: Creating Disk image...
	I0920 10:53:11.484104    8965 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:53:11.484326    8965 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2
	I0920 10:53:11.493907    8965 main.go:141] libmachine: STDOUT: 
	I0920 10:53:11.493926    8965 main.go:141] libmachine: STDERR: 
	I0920 10:53:11.493991    8965 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2 +20000M
	I0920 10:53:11.502213    8965 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:53:11.502228    8965 main.go:141] libmachine: STDERR: 
	I0920 10:53:11.502244    8965 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2
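
In plain terms, the driver builds the machine disk in two qemu-img steps: convert the raw seed image to qcow2, then grow it by the requested 20000 MB (qcow2 is sparse, so the file only consumes space as it is written). A minimal reproduction with shortened paths, as a sketch:

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M
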
	I0920 10:53:11.502249    8965 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:53:11.502260    8965 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:53:11.502302    8965 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:85:c0:1d:16:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2
	I0920 10:53:11.504062    8965 main.go:141] libmachine: STDOUT: 
	I0920 10:53:11.504082    8965 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:53:11.504101    8965 client.go:171] duration metric: took 285.443125ms to LocalClient.Create
	I0920 10:53:13.506294    8965 start.go:128] duration metric: took 2.345612542s to createHost
	I0920 10:53:13.506469    8965 start.go:83] releasing machines lock for "kubernetes-upgrade-744000", held for 2.34618525s
	W0920 10:53:13.506816    8965 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-744000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:53:13.513885    8965 out.go:201] 
	W0920 10:53:13.521063    8965 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:53:13.521088    8965 out.go:270] * 
	W0920 10:53:13.522673    8965 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:53:13.533025    8965 out.go:201] 

** /stderr **
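
Every start attempt above dies on the same precondition: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is answering on /var/run/socket_vmnet. A minimal triage sketch for the build agent follows; the Homebrew service management is an assumption, not something this log records:

	ls -l /var/run/socket_vmnet                # the daemon's socket should exist
	sudo launchctl list | grep socket_vmnet    # check whether the daemon is loaded
	sudo brew services restart socket_vmnet    # assumes a Homebrew-managed socket_vmnet
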
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-744000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-744000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-744000: (3.358440792s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-744000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-744000 status --format={{.Host}}: exit status 7 (61.709167ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-744000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-744000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.196121375s)

-- stdout --
	* [kubernetes-upgrade-744000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-744000" primary control-plane node in "kubernetes-upgrade-744000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-744000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-744000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0920 10:53:16.992755    8999 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:53:16.992882    8999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:53:16.992885    8999 out.go:358] Setting ErrFile to fd 2...
	I0920 10:53:16.992887    8999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:53:16.993008    8999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:53:16.994075    8999 out.go:352] Setting JSON to false
	I0920 10:53:17.010854    8999 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4959,"bootTime":1726849837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:53:17.010923    8999 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:53:17.016252    8999 out.go:177] * [kubernetes-upgrade-744000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:53:17.023206    8999 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:53:17.023273    8999 notify.go:220] Checking for updates...
	I0920 10:53:17.031087    8999 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:53:17.034185    8999 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:53:17.037244    8999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:53:17.040146    8999 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:53:17.043219    8999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:53:17.046501    8999 config.go:182] Loaded profile config "kubernetes-upgrade-744000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0920 10:53:17.046756    8999 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:53:17.051133    8999 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:53:17.058214    8999 start.go:297] selected driver: qemu2
	I0920 10:53:17.058221    8999 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:53:17.058291    8999 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:53:17.060586    8999 cni.go:84] Creating CNI manager for ""
	I0920 10:53:17.060624    8999 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:53:17.060649    8999 start.go:340] cluster config:
	{Name:kubernetes-upgrade-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:53:17.064054    8999 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:53:17.078631    8999 out.go:177] * Starting "kubernetes-upgrade-744000" primary control-plane node in "kubernetes-upgrade-744000" cluster
	I0920 10:53:17.082233    8999 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:53:17.082254    8999 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:53:17.082263    8999 cache.go:56] Caching tarball of preloaded images
	I0920 10:53:17.082335    8999 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:53:17.082340    8999 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:53:17.082400    8999 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/kubernetes-upgrade-744000/config.json ...
	I0920 10:53:17.082924    8999 start.go:360] acquireMachinesLock for kubernetes-upgrade-744000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:53:17.082950    8999 start.go:364] duration metric: took 20.167µs to acquireMachinesLock for "kubernetes-upgrade-744000"
	I0920 10:53:17.082958    8999 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:53:17.082963    8999 fix.go:54] fixHost starting: 
	I0920 10:53:17.083075    8999 fix.go:112] recreateIfNeeded on kubernetes-upgrade-744000: state=Stopped err=<nil>
	W0920 10:53:17.083083    8999 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:53:17.091025    8999 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-744000" ...
	I0920 10:53:17.095192    8999 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:53:17.095227    8999 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:85:c0:1d:16:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2
	I0920 10:53:17.097120    8999 main.go:141] libmachine: STDOUT: 
	I0920 10:53:17.097136    8999 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:53:17.097165    8999 fix.go:56] duration metric: took 14.201042ms for fixHost
	I0920 10:53:17.097170    8999 start.go:83] releasing machines lock for "kubernetes-upgrade-744000", held for 14.216958ms
	W0920 10:53:17.097176    8999 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:53:17.097219    8999 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:53:17.097223    8999 start.go:729] Will try again in 5 seconds ...
	I0920 10:53:22.099447    8999 start.go:360] acquireMachinesLock for kubernetes-upgrade-744000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:53:22.099982    8999 start.go:364] duration metric: took 406.541µs to acquireMachinesLock for "kubernetes-upgrade-744000"
	I0920 10:53:22.100067    8999 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:53:22.100088    8999 fix.go:54] fixHost starting: 
	I0920 10:53:22.100845    8999 fix.go:112] recreateIfNeeded on kubernetes-upgrade-744000: state=Stopped err=<nil>
	W0920 10:53:22.100871    8999 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:53:22.106163    8999 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-744000" ...
	I0920 10:53:22.114341    8999 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:53:22.114572    8999 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:85:c0:1d:16:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubernetes-upgrade-744000/disk.qcow2
	I0920 10:53:22.123878    8999 main.go:141] libmachine: STDOUT: 
	I0920 10:53:22.123945    8999 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:53:22.124027    8999 fix.go:56] duration metric: took 23.941166ms for fixHost
	I0920 10:53:22.124052    8999 start.go:83] releasing machines lock for "kubernetes-upgrade-744000", held for 24.045375ms
	W0920 10:53:22.124243    8999 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-744000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:53:22.131314    8999 out.go:201] 
	W0920 10:53:22.135166    8999 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:53:22.135194    8999 out.go:270] * 
	W0920 10:53:22.137006    8999 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:53:22.145218    8999 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-744000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-744000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-744000 version --output=json: exit status 1 (62.289667ms)

** stderr ** 
	error: context "kubernetes-upgrade-744000" does not exist

** /stderr **
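
The "does not exist" error is consistent with the failures above: the profile's kubeconfig context is only written once a start succeeds, and both start attempts exited 80. As a sketch (not part of the test), the state can be confirmed with:

	kubectl config get-contexts                                          # profile context absent
	kubectl --context kubernetes-upgrade-744000 version --output=json    # fails exactly as logged
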
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-20 10:53:22.223299 -0700 PDT m=+899.002395417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-744000 -n kubernetes-upgrade-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-744000 -n kubernetes-upgrade-744000: exit status 7 (33.086083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-744000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-744000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-744000
--- FAIL: TestKubernetesUpgrade (18.64s)

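For reference, the upgrade path this test drives, paraphrased from the (dbg) Run lines above, is three CLI steps; here only the stop succeeded because no VM ever booted:

	minikube start -p kubernetes-upgrade-744000 --memory=2200 --kubernetes-version=v1.20.0 --driver=qemu2   # oldest supported k8s
	minikube stop -p kubernetes-upgrade-744000
	minikube start -p kubernetes-upgrade-744000 --memory=2200 --kubernetes-version=v1.31.1 --driver=qemu2   # upgrade in place
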
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.06s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19678
- KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2863936608/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.06s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19678
- KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2959261283/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.00s)

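Both TestHyperkitDriverSkipUpgrade subtests fail identically, and arguably by design: hyperkit is an x86_64-only hypervisor, so DRV_UNSUPPORTED_OS is the expected verdict on a darwin/arm64 agent. The failure reproduces outside the suite with:

	uname -m                           # arm64 on this agent; hyperkit requires x86_64
	minikube start --driver=hyperkit   # exits 56 (DRV_UNSUPPORTED_OS), matching the runs above
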
TestStoppedBinaryUpgrade/Upgrade (573.97s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3524279172 start -p stopped-upgrade-423000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3524279172 start -p stopped-upgrade-423000 --memory=2200 --vm-driver=qemu2 : (40.198900917s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3524279172 -p stopped-upgrade-423000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3524279172 -p stopped-upgrade-423000 stop: (12.089622667s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-423000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-423000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.568466s)

-- stdout --
	* [stopped-upgrade-423000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-423000" primary control-plane node in "stopped-upgrade-423000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-423000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner

-- /stdout --
** stderr ** 
	I0920 10:54:15.644221    9036 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:54:15.644373    9036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:54:15.644377    9036 out.go:358] Setting ErrFile to fd 2...
	I0920 10:54:15.644380    9036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:54:15.644507    9036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:54:15.645634    9036 out.go:352] Setting JSON to false
	I0920 10:54:15.663750    9036 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5018,"bootTime":1726849837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:54:15.663866    9036 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:54:15.668896    9036 out.go:177] * [stopped-upgrade-423000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:54:15.676866    9036 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:54:15.676909    9036 notify.go:220] Checking for updates...
	I0920 10:54:15.684840    9036 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:54:15.687880    9036 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:54:15.690896    9036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:54:15.693816    9036 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:54:15.696882    9036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:54:15.700177    9036 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:54:15.702857    9036 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 10:54:15.705859    9036 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:54:15.709794    9036 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:54:15.716839    9036 start.go:297] selected driver: qemu2
	I0920 10:54:15.716844    9036 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51540 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:54:15.716895    9036 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:54:15.719696    9036 cni.go:84] Creating CNI manager for ""
	I0920 10:54:15.719735    9036 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:54:15.719753    9036 start.go:340] cluster config:
	{Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51540 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:54:15.719809    9036 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:54:15.727811    9036 out.go:177] * Starting "stopped-upgrade-423000" primary control-plane node in "stopped-upgrade-423000" cluster
	I0920 10:54:15.731822    9036 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:54:15.731854    9036 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0920 10:54:15.731861    9036 cache.go:56] Caching tarball of preloaded images
	I0920 10:54:15.731939    9036 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:54:15.731946    9036 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0920 10:54:15.731997    9036 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/config.json ...
	I0920 10:54:15.732397    9036 start.go:360] acquireMachinesLock for stopped-upgrade-423000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:54:15.732431    9036 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "stopped-upgrade-423000"
	I0920 10:54:15.732440    9036 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:54:15.732446    9036 fix.go:54] fixHost starting: 
	I0920 10:54:15.732553    9036 fix.go:112] recreateIfNeeded on stopped-upgrade-423000: state=Stopped err=<nil>
	W0920 10:54:15.732561    9036 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:54:15.736774    9036 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-423000" ...
	I0920 10:54:15.744751    9036 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:54:15.744847    9036 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51506-:22,hostfwd=tcp::51507-:2376,hostname=stopped-upgrade-423000 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/disk.qcow2
	I0920 10:54:15.794993    9036 main.go:141] libmachine: STDOUT: 
	I0920 10:54:15.795019    9036 main.go:141] libmachine: STDERR: 
	I0920 10:54:15.795026    9036 main.go:141] libmachine: Waiting for VM to start (ssh -p 51506 docker@127.0.0.1)...
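
Note the contrast with every qemu2 start earlier in this report: this v1.26.0-era profile launches qemu-system-aarch64 directly with user-mode networking (-nic user,...,hostfwd=...) instead of socket_vmnet, so the missing daemon is irrelevant and the VM boots. The hostfwd rules are what libmachine then dials; as a sketch:

	ssh -p 51506 docker@127.0.0.1    # guest sshd via hostfwd=tcp::51506-:22
	# hostfwd=tcp::51507-:2376 likewise exposes the guest Docker daemon
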
	I0920 10:54:35.801924    9036 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/config.json ...
	I0920 10:54:35.802835    9036 machine.go:93] provisionDockerMachine start ...
	I0920 10:54:35.803228    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:35.803777    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:35.803793    9036 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 10:54:35.877551    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 10:54:35.877586    9036 buildroot.go:166] provisioning hostname "stopped-upgrade-423000"
	I0920 10:54:35.877721    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:35.877984    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:35.877995    9036 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-423000 && echo "stopped-upgrade-423000" | sudo tee /etc/hostname
	I0920 10:54:35.944348    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-423000
	
	I0920 10:54:35.944454    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:35.944656    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:35.944670    9036 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-423000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-423000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-423000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 10:54:36.000537    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:54:36.000552    9036 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19678-6679/.minikube CaCertPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19678-6679/.minikube}
	I0920 10:54:36.000562    9036 buildroot.go:174] setting up certificates
	I0920 10:54:36.000567    9036 provision.go:84] configureAuth start
	I0920 10:54:36.000571    9036 provision.go:143] copyHostCerts
	I0920 10:54:36.000659    9036 exec_runner.go:144] found /Users/jenkins/minikube-integration/19678-6679/.minikube/cert.pem, removing ...
	I0920 10:54:36.000667    9036 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19678-6679/.minikube/cert.pem
	I0920 10:54:36.000831    9036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19678-6679/.minikube/cert.pem (1123 bytes)
	I0920 10:54:36.001051    9036 exec_runner.go:144] found /Users/jenkins/minikube-integration/19678-6679/.minikube/key.pem, removing ...
	I0920 10:54:36.001056    9036 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19678-6679/.minikube/key.pem
	I0920 10:54:36.001119    9036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19678-6679/.minikube/key.pem (1675 bytes)
	I0920 10:54:36.001255    9036 exec_runner.go:144] found /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.pem, removing ...
	I0920 10:54:36.001259    9036 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.pem
	I0920 10:54:36.001341    9036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.pem (1078 bytes)
	I0920 10:54:36.001446    9036 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-423000 san=[127.0.0.1 localhost minikube stopped-upgrade-423000]
	I0920 10:54:36.157516    9036 provision.go:177] copyRemoteCerts
	I0920 10:54:36.157575    9036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 10:54:36.157587    9036 sshutil.go:53] new ssh client: &{IP:localhost Port:51506 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0920 10:54:36.185054    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 10:54:36.191816    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 10:54:36.199189    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 10:54:36.206159    9036 provision.go:87] duration metric: took 205.583458ms to configureAuth
	I0920 10:54:36.206169    9036 buildroot.go:189] setting minikube options for container-runtime
	I0920 10:54:36.206295    9036 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:54:36.206344    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:36.206431    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:36.206436    9036 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 10:54:36.256794    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 10:54:36.256804    9036 buildroot.go:70] root file system type: tmpfs
	I0920 10:54:36.256861    9036 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 10:54:36.256918    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:36.257028    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:36.257065    9036 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 10:54:36.312513    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 10:54:36.312573    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:36.312682    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:36.312698    9036 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 10:54:36.671447    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0920 10:54:36.671461    9036 machine.go:96] duration metric: took 868.618792ms to provisionDockerMachine
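
The SSH one-liner at 10:54:36.312 is the provisioner's idempotent unit install: write docker.service.new, swap it in only when it differs from the unit already on disk, then reload and restart. In isolation the idiom is:

	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	}

Here diff exited non-zero because no previous unit existed, so the new unit was installed and docker (re)started.
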
	I0920 10:54:36.671467    9036 start.go:293] postStartSetup for "stopped-upgrade-423000" (driver="qemu2")
	I0920 10:54:36.671474    9036 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 10:54:36.671543    9036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 10:54:36.671553    9036 sshutil.go:53] new ssh client: &{IP:localhost Port:51506 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0920 10:54:36.699766    9036 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 10:54:36.701111    9036 info.go:137] Remote host: Buildroot 2021.02.12
	I0920 10:54:36.701117    9036 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19678-6679/.minikube/addons for local assets ...
	I0920 10:54:36.701207    9036 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19678-6679/.minikube/files for local assets ...
	I0920 10:54:36.701345    9036 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19678-6679/.minikube/files/etc/ssl/certs/71912.pem -> 71912.pem in /etc/ssl/certs
	I0920 10:54:36.701485    9036 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 10:54:36.704091    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/files/etc/ssl/certs/71912.pem --> /etc/ssl/certs/71912.pem (1708 bytes)
	I0920 10:54:36.711188    9036 start.go:296] duration metric: took 39.716167ms for postStartSetup
	I0920 10:54:36.711204    9036 fix.go:56] duration metric: took 20.978870541s for fixHost
	I0920 10:54:36.711243    9036 main.go:141] libmachine: Using SSH client type: native
	I0920 10:54:36.711346    9036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104981c00] 0x104984440 <nil>  [] 0s} localhost 51506 <nil> <nil>}
	I0920 10:54:36.711351    9036 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 10:54:36.760934    9036 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726854877.180929837
	
	I0920 10:54:36.760941    9036 fix.go:216] guest clock: 1726854877.180929837
	I0920 10:54:36.760944    9036 fix.go:229] Guest: 2024-09-20 10:54:37.180929837 -0700 PDT Remote: 2024-09-20 10:54:36.711206 -0700 PDT m=+21.088847418 (delta=469.723837ms)
	I0920 10:54:36.760955    9036 fix.go:200] guest clock delta is within tolerance: 469.723837ms
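The clock check above runs `date +%s.%N` over SSH and compares the guest timestamp against the host clock; here the 469 ms delta is within tolerance, so no resync happens. A sketch of the comparison, assuming GNU date on both ends; the 1 s tolerance is illustrative, the actual threshold in fix.go is not shown in this log:

	guest=$(ssh -p 51506 docker@localhost 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN {
	  d = g - h; if (d < 0) d = -d       # absolute drift in seconds
	  printf "delta=%.6fs\n", d
	  if (d > 1.0) exit 0                # assumed 1s tolerance
	  exit 1
	}' && echo "would resync guest clock"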
	I0920 10:54:36.760957    9036 start.go:83] releasing machines lock for "stopped-upgrade-423000", held for 21.028633667s
	I0920 10:54:36.761029    9036 ssh_runner.go:195] Run: cat /version.json
	I0920 10:54:36.761034    9036 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 10:54:36.761051    9036 sshutil.go:53] new ssh client: &{IP:localhost Port:51506 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0920 10:54:36.761052    9036 sshutil.go:53] new ssh client: &{IP:localhost Port:51506 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	W0920 10:54:36.761597    9036 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51506: connect: connection refused
	I0920 10:54:36.761615    9036 retry.go:31] will retry after 356.261874ms: dial tcp [::1]:51506: connect: connection refused
	W0920 10:54:37.164759    9036 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0920 10:54:37.164941    9036 ssh_runner.go:195] Run: systemctl --version
	I0920 10:54:37.168613    9036 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 10:54:37.171997    9036 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 10:54:37.172054    9036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0920 10:54:37.177217    9036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0920 10:54:37.184232    9036 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 10:54:37.184245    9036 start.go:495] detecting cgroup driver to use...
	I0920 10:54:37.184353    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:54:37.193850    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0920 10:54:37.197557    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 10:54:37.201019    9036 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 10:54:37.201057    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 10:54:37.204480    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:54:37.208034    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 10:54:37.211531    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:54:37.214611    9036 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 10:54:37.217401    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 10:54:37.220107    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 10:54:37.223391    9036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
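The run of sed commands above rewrites /etc/containerd/config.toml in place so containerd matches the detected "cgroupfs" driver and the pause image kubeadm expects. The key substitutions, annotated (same commands as logged):

	# Pin the sandbox (pause) image kubeadm will reference.
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml
	# Use the cgroupfs driver rather than systemd-managed cgroups.
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	# Migrate legacy runtime names to the v2 runc shim.
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml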
	I0920 10:54:37.226615    9036 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 10:54:37.229029    9036 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 10:54:37.231929    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:54:37.315576    9036 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 10:54:37.321894    9036 start.go:495] detecting cgroup driver to use...
	I0920 10:54:37.321967    9036 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 10:54:37.327421    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:54:37.332551    9036 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 10:54:37.342722    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:54:37.347296    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:54:37.351860    9036 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 10:54:37.411531    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:54:37.416513    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:54:37.421712    9036 ssh_runner.go:195] Run: which cri-dockerd
	I0920 10:54:37.422875    9036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 10:54:37.425311    9036 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0920 10:54:37.430057    9036 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 10:54:37.517375    9036 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 10:54:37.608675    9036 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 10:54:37.608739    9036 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
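The 130-byte /etc/docker/daemon.json scp'd here is not echoed in the log. Given the "cgroupfs" driver chosen above, a plausible shape is sketched below; the exact contents are an assumption, only the path and the restart sequence come from the log:

	# Contents assumed; only exec-opts is implied by the log.
	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl daemon-reload
	sudo systemctl restart docker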
	I0920 10:54:37.613943    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:54:37.679251    9036 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:54:38.821318    9036 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.142053375s)
	I0920 10:54:38.821389    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 10:54:38.826128    9036 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0920 10:54:38.832776    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:54:38.837639    9036 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 10:54:38.924825    9036 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 10:54:39.007721    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:54:39.083392    9036 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 10:54:39.089360    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:54:39.094005    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:54:39.162910    9036 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 10:54:39.201506    9036 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 10:54:39.201601    9036 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 10:54:39.204203    9036 start.go:563] Will wait 60s for crictl version
	I0920 10:54:39.204269    9036 ssh_runner.go:195] Run: which crictl
	I0920 10:54:39.205725    9036 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 10:54:39.219719    9036 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0920 10:54:39.219812    9036 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:54:39.235312    9036 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:54:39.253528    9036 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0920 10:54:39.253609    9036 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0920 10:54:39.254871    9036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
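The one-liner above is minikube's /etc/hosts rewrite idiom: strip any stale entry, append the fresh mapping, and copy the result back via sudo, avoiding sed -i on a file that may be a bind mount. Annotated:

	# Drop any existing host.minikube.internal line, append the
	# current mapping, then install the rewritten file.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'10.0.2.2\thost.minikube.internal'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts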
	I0920 10:54:39.259169    9036 kubeadm.go:883] updating cluster {Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51540 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0920 10:54:39.259218    9036 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:54:39.259275    9036 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:54:39.274953    9036 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:54:39.274963    9036 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:54:39.275018    9036 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:54:39.278375    9036 ssh_runner.go:195] Run: which lz4
	I0920 10:54:39.279871    9036 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 10:54:39.281377    9036 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 10:54:39.281392    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0920 10:54:40.264238    9036 docker.go:649] duration metric: took 984.419834ms to copy over tarball
	I0920 10:54:40.264308    9036 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 10:54:41.418720    9036 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.154403166s)
	I0920 10:54:41.418734    9036 ssh_runner.go:146] rm: /preloaded.tar.lz4
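This is the preload path: check whether /preloaded.tar.lz4 already exists on the guest, copy the cached tarball over if not, extract it into /var preserving security xattrs (so file capabilities on the bundled binaries survive), then delete it. The same sequence as a standalone sketch; "guest" is a hypothetical SSH alias for the VM:

	TARBALL=$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	# Copy only if the guest does not already have the tarball.
	ssh guest 'stat -c "%s %y" /preloaded.tar.lz4' 2>/dev/null \
	  || scp "$TARBALL" guest:/preloaded.tar.lz4
	# Extract with lz4 into /var, keeping security xattrs, then clean up.
	ssh guest 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'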
	I0920 10:54:41.434217    9036 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:54:41.437059    9036 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0920 10:54:41.441940    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:54:41.526287    9036 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:54:43.221501    9036 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.695207542s)
	I0920 10:54:43.221604    9036 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:54:43.234820    9036 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:54:43.234829    9036 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:54:43.234834    9036 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 10:54:43.241194    9036 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:54:43.242147    9036 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:54:43.244230    9036 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:54:43.244328    9036 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:54:43.245656    9036 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:54:43.245773    9036 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:54:43.247081    9036 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:54:43.247582    9036 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:54:43.248593    9036 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:54:43.248673    9036 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:54:43.249790    9036 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:54:43.250083    9036 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:54:43.251256    9036 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 10:54:43.251539    9036 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:54:43.252547    9036 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:54:43.253037    9036 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 10:54:43.699999    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:54:43.701753    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0920 10:54:43.703616    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:54:43.706504    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:54:43.716148    9036 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0920 10:54:43.716171    9036 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:54:43.716247    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:54:43.736320    9036 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0920 10:54:43.736343    9036 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:54:43.736374    9036 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0920 10:54:43.736386    9036 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:54:43.736342    9036 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0920 10:54:43.736405    9036 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:54:43.736409    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0920 10:54:43.736422    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:54:43.736444    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0920 10:54:43.740789    9036 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0920 10:54:43.740945    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:54:43.741709    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:54:43.747931    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0920 10:54:43.768935    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0920 10:54:43.772652    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0920 10:54:43.772652    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0920 10:54:43.772709    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0920 10:54:43.772727    9036 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0920 10:54:43.772750    9036 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:54:43.772779    9036 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0920 10:54:43.772792    9036 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:54:43.772798    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:54:43.772795    9036 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:54:43.772826    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:54:43.784073    9036 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0920 10:54:43.784096    9036 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0920 10:54:43.784162    9036 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0920 10:54:43.792053    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0920 10:54:43.792063    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 10:54:43.792086    9036 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0920 10:54:43.792103    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0920 10:54:43.792182    9036 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:54:43.803293    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0920 10:54:43.803424    9036 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0920 10:54:43.803903    9036 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0920 10:54:43.803914    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0920 10:54:43.813169    9036 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0920 10:54:43.813198    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0920 10:54:43.848916    9036 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0920 10:54:43.848929    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0920 10:54:43.933411    9036 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0920 10:54:43.933472    9036 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:54:43.933480    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0920 10:54:44.058319    9036 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0920 10:54:44.104736    9036 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0920 10:54:44.104856    9036 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:54:44.118439    9036 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:54:44.118453    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0920 10:54:44.127411    9036 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0920 10:54:44.127446    9036 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:54:44.127518    9036 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:54:44.258722    9036 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0920 10:54:44.258746    9036 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 10:54:44.258877    9036 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:54:44.260424    9036 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0920 10:54:44.260438    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0920 10:54:44.289488    9036 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:54:44.289509    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0920 10:54:44.518061    9036 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 10:54:44.518098    9036 cache_images.go:92] duration metric: took 1.28326375s to LoadCachedImages
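Each image missing from the runtime was staged under /var/lib/minikube/images and streamed into the daemon. Piping through `sudo cat` keeps the docker client itself unprivileged while still reading the root-owned archive; a sketch with one of the logged paths:

	IMG=/var/lib/minikube/images/pause_3.7
	# Stream the staged image archive into the docker daemon.
	sudo cat "$IMG" | docker load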
	W0920 10:54:44.518133    9036 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0920 10:54:44.518138    9036 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0920 10:54:44.518193    9036 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-423000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
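The [Service] stanza above relies on systemd's override idiom: a bare `ExecStart=` first clears the packaged command line so the minikube-specific invocation can replace it. Written out as the drop-in minikube scps to the guest (same flags as logged; the heredoc form is a sketch of the transfer, not the literal mechanism):

	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	[Unit]
	Wants=docker.socket

	[Service]
	# Empty ExecStart= resets the unit's packaged command line.
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-423000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	EOF
	sudo systemctl daemon-reload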
	I0920 10:54:44.518269    9036 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 10:54:44.531867    9036 cni.go:84] Creating CNI manager for ""
	I0920 10:54:44.531886    9036 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:54:44.531891    9036 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 10:54:44.531899    9036 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-423000 NodeName:stopped-upgrade-423000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 10:54:44.531973    9036 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-423000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 10:54:44.532035    9036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0920 10:54:44.535407    9036 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 10:54:44.535434    9036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 10:54:44.538345    9036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0920 10:54:44.543372    9036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 10:54:44.548342    9036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0920 10:54:44.553745    9036 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0920 10:54:44.554927    9036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 10:54:44.558645    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:54:44.634949    9036 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:54:44.641514    9036 certs.go:68] Setting up /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000 for IP: 10.0.2.15
	I0920 10:54:44.641528    9036 certs.go:194] generating shared ca certs ...
	I0920 10:54:44.641538    9036 certs.go:226] acquiring lock for ca certs: {Name:mkeda31d83c21edf6ebc3767ef11bc03f6f18a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:54:44.641714    9036 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.key
	I0920 10:54:44.641766    9036 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/proxy-client-ca.key
	I0920 10:54:44.641772    9036 certs.go:256] generating profile certs ...
	I0920 10:54:44.641849    9036 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/client.key
	I0920 10:54:44.641867    9036 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81
	I0920 10:54:44.641877    9036 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0920 10:54:44.813213    9036 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81 ...
	I0920 10:54:44.813227    9036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81: {Name:mk907fabc7f6e8ab3ba7b6f06cfcdc116f1a9698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:54:44.813574    9036 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81 ...
	I0920 10:54:44.813578    9036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81: {Name:mkc3da0abb71653cc5ab3b57f0e66ae346ec6554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:54:44.813713    9036 certs.go:381] copying /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.crt.a2b51d81 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.crt
	I0920 10:54:44.813860    9036 certs.go:385] copying /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.key.a2b51d81 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.key
	I0920 10:54:44.814023    9036 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/proxy-client.key
	I0920 10:54:44.814160    9036 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/7191.pem (1338 bytes)
	W0920 10:54:44.814189    9036 certs.go:480] ignoring /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/7191_empty.pem, impossibly tiny 0 bytes
	I0920 10:54:44.814194    9036 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 10:54:44.814224    9036 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem (1078 bytes)
	I0920 10:54:44.814243    9036 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem (1123 bytes)
	I0920 10:54:44.814262    9036 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/key.pem (1675 bytes)
	I0920 10:54:44.814302    9036 certs.go:484] found cert: /Users/jenkins/minikube-integration/19678-6679/.minikube/files/etc/ssl/certs/71912.pem (1708 bytes)
	I0920 10:54:44.814632    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 10:54:44.821860    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 10:54:44.828317    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 10:54:44.835534    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 10:54:44.842831    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 10:54:44.849778    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 10:54:44.856285    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 10:54:44.863504    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 10:54:44.871014    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/7191.pem --> /usr/share/ca-certificates/7191.pem (1338 bytes)
	I0920 10:54:44.878094    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/files/etc/ssl/certs/71912.pem --> /usr/share/ca-certificates/71912.pem (1708 bytes)
	I0920 10:54:44.884684    9036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19678-6679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 10:54:44.891792    9036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 10:54:44.896942    9036 ssh_runner.go:195] Run: openssl version
	I0920 10:54:44.898790    9036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7191.pem && ln -fs /usr/share/ca-certificates/7191.pem /etc/ssl/certs/7191.pem"
	I0920 10:54:44.901719    9036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7191.pem
	I0920 10:54:44.903148    9036 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:39 /usr/share/ca-certificates/7191.pem
	I0920 10:54:44.903171    9036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7191.pem
	I0920 10:54:44.905435    9036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7191.pem /etc/ssl/certs/51391683.0"
	I0920 10:54:44.908456    9036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71912.pem && ln -fs /usr/share/ca-certificates/71912.pem /etc/ssl/certs/71912.pem"
	I0920 10:54:44.911800    9036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71912.pem
	I0920 10:54:44.913317    9036 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:39 /usr/share/ca-certificates/71912.pem
	I0920 10:54:44.913335    9036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71912.pem
	I0920 10:54:44.914900    9036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71912.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 10:54:44.917583    9036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 10:54:44.920530    9036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:54:44.921857    9036 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:50 /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:54:44.921882    9036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:54:44.923567    9036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
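The 51391683.0, 3ec20f2e.0, and b5213941.0 names above are OpenSSL subject hashes: TLS verification locates a CA in /etc/ssl/certs by hashing its subject and looking for <hash>.0. The generic form of the idiom, using the minikubeCA cert from the log:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	# Derive the subject hash OpenSSL uses for directory lookup
	# (b5213941 for minikubeCA in this run).
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"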
	I0920 10:54:44.926782    9036 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 10:54:44.928283    9036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 10:54:44.930148    9036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 10:54:44.931905    9036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 10:54:44.933720    9036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 10:54:44.935477    9036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 10:54:44.937278    9036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
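Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit marks certs needing regeneration before cluster restart. The same checks over the logged files:

	# Exit status 1 from -checkend means the cert expires within 24h.
	for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	           etcd/healthcheck-client etcd/peer front-proxy-client; do
	  openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/$crt.crt" || echo "$crt expires within 24h"
	done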
	I0920 10:54:44.938981    9036 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51540 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:54:44.939062    9036 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:54:44.948942    9036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 10:54:44.952379    9036 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 10:54:44.952391    9036 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 10:54:44.952420    9036 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 10:54:44.956072    9036 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:54:44.956391    9036 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-423000" does not appear in /Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:54:44.956486    9036 kubeconfig.go:62] /Users/jenkins/minikube-integration/19678-6679/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-423000" cluster setting kubeconfig missing "stopped-upgrade-423000" context setting]
	I0920 10:54:44.956676    9036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/kubeconfig: {Name:mkec5cafcbf4b1660482b6f210de54829a52092a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:54:44.957074    9036 kapi.go:59] client config for stopped-upgrade-423000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/client.key", CAFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f5a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:54:44.957412    9036 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 10:54:44.960272    9036 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-423000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0920 10:54:44.960277    9036 kubeadm.go:1160] stopping kube-system containers ...
	I0920 10:54:44.960322    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:54:44.970871    9036 docker.go:483] Stopping containers: [679ec37c5db9 bbc78c4773e8 aceabc06111c 1619d098154d 3f14f3112347 1fca3ed6d070 61a375dec486 650308392c15]
	I0920 10:54:44.970954    9036 ssh_runner.go:195] Run: docker stop 679ec37c5db9 bbc78c4773e8 aceabc06111c 1619d098154d 3f14f3112347 1fca3ed6d070 61a375dec486 650308392c15
	I0920 10:54:44.981684    9036 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 10:54:44.987353    9036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:54:44.990240    9036 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:54:44.990246    9036 kubeadm.go:157] found existing configuration files:
	
	I0920 10:54:44.990272    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/admin.conf
	I0920 10:54:44.992754    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:54:44.992778    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:54:44.995992    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/kubelet.conf
	I0920 10:54:44.999002    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:54:44.999033    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:54:45.001555    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/controller-manager.conf
	I0920 10:54:45.004100    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:54:45.004127    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:54:45.007141    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/scheduler.conf
	I0920 10:54:45.009768    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:54:45.009791    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 10:54:45.012203    9036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:54:45.015289    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:54:45.037030    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:54:45.769596    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:54:45.896634    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:54:45.919693    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
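Rather than a full `kubeadm init`, the restart path replays individual init phases against the generated /var/tmp/minikube/kubeadm.yaml. The same sequence, annotated, with a small helper function for readability (the helper is a sketch, not minikube's mechanism):

	kphase() {
	  sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	    kubeadm init phase "$@" --config /var/tmp/minikube/kubeadm.yaml
	}
	kphase certs all          # regenerate any missing certificates
	kphase kubeconfig all     # admin/kubelet/controller-manager/scheduler kubeconfigs
	kphase kubelet-start      # write kubelet config and (re)start it
	kphase control-plane all  # static pod manifests for the control plane
	kphase etcd local         # static pod manifest for local etcd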
	I0920 10:54:45.940445    9036 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:54:45.940530    9036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:54:46.442387    9036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:54:46.942568    9036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:54:46.947130    9036 api_server.go:72] duration metric: took 1.006691833s to wait for apiserver process to appear ...
	I0920 10:54:46.947140    9036 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:54:46.947150    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:51.948658    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:51.948734    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:56.949328    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:56.949390    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:01.949795    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:01.949838    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:06.950526    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:06.950586    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:11.951274    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:11.951318    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:16.952378    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:16.952509    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:21.954063    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:21.954100    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:26.955705    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:26.955747    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:31.957691    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:31.957726    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:36.959969    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:36.960011    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:41.962320    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:41.962359    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:46.963633    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
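
Each Checking/stopped pair above is a single probe: a GET to /healthz whose 5-second client timeout expires before any response headers arrive, which is why consecutive checks sit exactly five seconds apart. A self-contained sketch of one probe; skipping TLS verification here is our shortcut to keep the sketch short, not a claim about how minikube configures its client:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues one probe like the loop above: any transport error
// (including "Client.Timeout exceeded while awaiting headers") or a
// non-200 status counts as unhealthy.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the 5s spacing in the log
		Transport: &http.Transport{
			// Assumption for brevity; do not do this outside a throwaway probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
		return
	}
	fmt.Println("apiserver healthz ok")
}
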
	I0920 10:55:46.963833    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:46.976813    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:55:46.976914    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:46.988423    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:55:46.988511    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:46.999483    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:55:46.999576    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:47.015155    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:55:47.015226    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:47.026271    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:55:47.026346    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:47.038094    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:55:47.038172    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:47.048526    9036 logs.go:276] 0 containers: []
	W0920 10:55:47.048544    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:47.048605    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:47.059675    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
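
Once the healthz budget is spent, the runner turns to diagnostics, issuing one docker ps query per control-plane component. The -a flag includes exited containers, which is why most components report two IDs here (an older instance alongside the current one). A sketch of that discovery step, with the component list taken from the queries above; the helper name containerIDs is ours:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors one discovery query above: all containers (running
// or exited, hence -a) whose names match k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
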
	I0920 10:55:47.059693    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:55:47.059698    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:55:47.071198    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:55:47.071208    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:55:47.089983    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:55:47.090000    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:47.103053    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:47.103065    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:47.142321    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:47.142339    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:47.251450    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:55:47.251467    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:55:47.269253    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:55:47.269265    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:55:47.284401    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:55:47.284414    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:55:47.297843    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:47.297856    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:47.302150    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:55:47.302156    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:55:47.312979    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:47.312989    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:47.339493    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:55:47.339507    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:55:47.351404    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:55:47.351414    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:55:47.392904    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:55:47.392920    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:55:47.406984    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:55:47.407000    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:55:47.423560    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:55:47.423571    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:55:47.436986    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:55:47.436998    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
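
The gathering pass itself is uniform: docker logs capped at 400 lines per container, plus a fixed set of host-level sources. A sketch of one pass with the command strings copied from the log; the source ordering shuffles between cycles above, which is consistent with ranging over a Go map, whose iteration order is unspecified. Container IDs come from argv here as an illustrative stand-in for the discovery step:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// bash runs one gathering command the way the log shows, via /bin/bash -c,
// and dumps whatever output it produced (errors included).
func bash(cmd string) {
	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	os.Stdout.Write(out)
}

func main() {
	// Host-level sources; command strings as they appear in the log.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		fmt.Printf("==> Gathering logs for %s\n", name)
		bash(cmd)
	}
	// Per-container logs, capped at the same 400 lines as the log shows.
	for _, id := range os.Args[1:] {
		fmt.Printf("==> docker logs %s\n", id)
		bash("docker logs --tail 400 " + id)
	}
}
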
	I0920 10:55:49.951833    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:54.953164    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:54.953375    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:54.973792    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:55:54.973899    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:54.988383    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:55:54.988476    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:55.000261    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:55:55.000331    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:55.014933    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:55:55.015029    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:55.025661    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:55:55.025739    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:55.036010    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:55:55.036101    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:55.046309    9036 logs.go:276] 0 containers: []
	W0920 10:55:55.046320    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:55.046384    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:55.056765    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:55:55.056784    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:55.056790    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:55.097249    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:55:55.097259    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:55:55.135456    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:55:55.135468    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:55:55.146858    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:55:55.146870    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:55:55.161944    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:55:55.161958    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:55:55.173369    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:55:55.173380    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:55:55.187186    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:55:55.187195    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:55:55.201765    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:55:55.201779    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:55:55.215640    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:55:55.215653    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:55:55.233298    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:55:55.233309    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:55:55.248574    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:55:55.248584    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:55.260577    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:55.260588    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:55.297439    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:55.297448    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:55.301524    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:55:55.301530    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:55:55.315374    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:55:55.315382    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:55:55.328830    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:55:55.328845    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:55:55.349654    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:55.349666    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:57.875649    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:02.877867    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:02.878114    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:02.901941    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:02.902045    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:02.915992    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:02.916086    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:02.926940    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:02.927022    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:02.937091    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:02.937180    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:02.947517    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:02.947589    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:02.958095    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:02.958176    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:02.968401    9036 logs.go:276] 0 containers: []
	W0920 10:56:02.968414    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:02.968489    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:02.978913    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:02.978929    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:02.978936    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:03.017670    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:03.017683    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:03.034048    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:03.034059    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:03.049488    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:03.049500    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:03.064665    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:03.064676    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:03.077879    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:03.077890    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:03.093851    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:03.093865    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:03.132082    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:03.132090    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:03.150013    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:03.150023    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:03.161318    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:03.161329    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:03.186472    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:03.186482    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:03.202294    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:03.202304    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:03.214145    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:03.214156    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:03.218341    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:03.218350    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:03.253810    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:03.253822    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:03.273351    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:03.273362    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:03.291861    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:03.291870    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:05.805840    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:10.808487    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:10.808713    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:10.825324    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:10.825424    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:10.838040    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:10.838124    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:10.850222    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:10.850306    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:10.860762    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:10.860855    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:10.872058    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:10.872133    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:10.882972    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:10.883055    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:10.893479    9036 logs.go:276] 0 containers: []
	W0920 10:56:10.893493    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:10.893565    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:10.903686    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:10.903708    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:10.903715    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:10.917788    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:10.917799    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:10.931394    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:10.931409    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:10.945902    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:10.945911    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:10.957135    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:10.957146    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:10.995959    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:10.995975    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:11.006868    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:11.006878    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:11.011175    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:11.011185    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:11.049080    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:11.049091    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:11.061368    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:11.061379    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:11.072742    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:11.072754    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:11.097871    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:11.097887    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:11.111746    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:11.111755    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:11.150117    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:11.150146    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:11.169025    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:11.169039    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:11.180603    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:11.180612    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:11.197814    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:11.197824    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:13.713995    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:18.716446    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:18.716714    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:18.737630    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:18.737754    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:18.751852    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:18.751943    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:18.764019    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:18.764100    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:18.774524    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:18.774612    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:18.793638    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:18.793724    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:18.804309    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:18.804388    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:18.814728    9036 logs.go:276] 0 containers: []
	W0920 10:56:18.814740    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:18.814811    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:18.824999    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:18.825021    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:18.825026    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:18.861421    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:18.861435    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:18.872577    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:18.872587    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:18.889801    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:18.889813    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:18.912820    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:18.912828    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:18.949030    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:18.949039    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:18.962913    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:18.962927    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:18.974208    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:18.974216    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:18.991571    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:18.991581    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:18.995777    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:18.995784    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:19.036980    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:19.037006    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:19.048411    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:19.048422    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:19.059777    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:19.059791    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:19.072135    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:19.072148    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:19.087891    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:19.087900    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:19.104478    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:19.104489    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:19.119117    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:19.119131    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:21.634605    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:26.636855    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:26.637036    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:26.654329    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:26.654429    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:26.670636    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:26.670719    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:26.691096    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:26.691176    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:26.701293    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:26.701374    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:26.711872    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:26.711951    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:26.722879    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:26.722957    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:26.738466    9036 logs.go:276] 0 containers: []
	W0920 10:56:26.738478    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:26.738543    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:26.749760    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:26.749780    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:26.749788    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:26.774534    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:26.774541    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:26.786160    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:26.786172    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:26.797617    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:26.797627    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:26.808470    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:26.808481    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:26.820090    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:26.820099    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:26.857600    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:26.857611    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:26.872840    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:26.872850    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:26.907533    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:26.907545    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:26.922063    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:26.922078    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:26.940573    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:26.940584    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:26.955832    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:26.955843    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:26.973707    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:26.973717    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:27.012761    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:27.012769    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:27.017039    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:27.017045    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:27.030390    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:27.030400    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:27.044480    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:27.044489    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:29.557961    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:34.560286    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:34.560492    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:34.573646    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:34.573744    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:34.584483    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:34.584567    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:34.595064    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:34.595150    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:34.605520    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:34.605603    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:34.615802    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:34.615891    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:34.627774    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:34.627865    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:34.640062    9036 logs.go:276] 0 containers: []
	W0920 10:56:34.640078    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:34.640154    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:34.656540    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:34.656559    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:34.656565    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:34.672998    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:34.673010    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:34.697780    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:34.697794    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:34.702393    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:34.702409    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:34.723496    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:34.723508    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:34.743928    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:34.743940    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:34.757235    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:34.757247    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:34.769861    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:34.769872    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:34.784006    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:34.784018    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:34.798528    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:34.798542    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:34.811270    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:34.811282    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:34.851209    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:34.851221    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:34.865301    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:34.865312    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:34.888924    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:34.888934    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:34.902968    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:34.902979    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:34.942554    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:34.942565    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:34.980940    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:34.980953    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:37.495267    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:42.497518    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:42.497655    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:42.511644    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:42.511738    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:42.523586    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:42.523669    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:42.534379    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:42.534438    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:42.547239    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:42.547317    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:42.558673    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:42.558748    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:42.571721    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:42.571807    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:42.582978    9036 logs.go:276] 0 containers: []
	W0920 10:56:42.582990    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:42.583056    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:42.593979    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:42.593999    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:42.594007    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:42.608870    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:42.608885    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:42.621677    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:42.621691    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:42.638138    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:42.638149    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:42.656514    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:42.656532    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:42.671838    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:42.671855    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:42.687595    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:42.687607    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:42.714690    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:42.714713    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:42.727735    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:42.727748    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:42.768706    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:42.768718    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:42.773490    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:42.773502    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:42.788916    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:42.788927    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:42.802628    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:42.802643    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:42.815015    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:42.815026    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:42.852640    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:42.852651    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:42.890354    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:42.890366    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:42.901599    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:42.901609    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:45.414969    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:50.417189    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:50.417282    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:50.428944    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:50.429042    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:50.440044    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:50.440128    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:50.452186    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:50.452266    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:50.463678    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:50.463764    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:50.475029    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:50.475108    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:50.486417    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:50.486498    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:50.497559    9036 logs.go:276] 0 containers: []
	W0920 10:56:50.497570    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:50.497642    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:50.510756    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:50.510773    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:50.510778    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:56:50.527606    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:50.527615    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:50.540305    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:50.540317    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:50.552924    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:50.552933    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:50.591583    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:50.591593    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:50.632485    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:50.632494    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:50.674828    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:50.674840    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:50.689280    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:50.689297    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:50.702541    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:50.702552    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:50.725495    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:50.725502    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:50.729341    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:50.729347    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:50.740562    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:50.740576    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:50.753105    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:50.753117    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:50.764945    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:50.764955    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:50.793082    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:50.793092    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:50.816345    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:50.816358    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:50.833426    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:50.833437    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:53.347732    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:58.350084    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:58.350186    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:58.361839    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:56:58.361921    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:58.372794    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:56:58.372879    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:58.384342    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:56:58.384424    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:58.396434    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:56:58.396522    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:58.410489    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:56:58.410570    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:58.421835    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:56:58.421921    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:58.432943    9036 logs.go:276] 0 containers: []
	W0920 10:56:58.432955    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:58.433033    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:58.446380    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:56:58.446399    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:58.446406    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:58.450756    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:56:58.450766    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:56:58.466282    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:56:58.466293    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:56:58.504688    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:56:58.504704    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:56:58.519849    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:56:58.519862    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:56:58.534913    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:58.534928    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:58.557433    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:56:58.557440    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:58.571031    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:58.571041    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:58.607678    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:58.607686    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:58.642279    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:56:58.642292    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:56:58.653681    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:56:58.653693    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:56:58.665312    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:56:58.665325    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:56:58.677205    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:56:58.677215    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:56:58.694677    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:56:58.694687    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:56:58.708452    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:56:58.708468    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:56:58.724030    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:56:58.724042    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:56:58.743248    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:56:58.743261    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:01.267928    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:06.270207    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:06.270307    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:06.283006    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:06.283095    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:06.294983    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:06.295069    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:06.307379    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:06.307467    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:06.318637    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:06.318721    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:06.329913    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:06.330001    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:06.342976    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:06.343066    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:06.355025    9036 logs.go:276] 0 containers: []
	W0920 10:57:06.355038    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:06.355113    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:06.370880    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:06.370897    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:06.370902    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:06.386633    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:06.386646    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:57:06.398648    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:06.398663    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:06.422206    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:06.422213    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:06.440655    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:06.440667    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:06.476273    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:06.476287    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:06.491119    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:06.491130    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:06.502825    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:06.502836    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:06.514153    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:06.514166    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:06.527531    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:06.527542    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:06.539808    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:06.539823    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:06.576115    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:06.576125    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:06.580645    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:06.580654    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:06.619087    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:06.619104    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:06.639586    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:06.639596    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:06.653183    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:06.653193    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:06.664779    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:06.664789    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:09.178011    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:14.180250    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:14.180371    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:14.191845    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:14.191931    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:14.203152    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:14.203234    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:14.214497    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:14.214590    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:14.225320    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:14.225403    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:14.236252    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:14.236339    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:14.246821    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:14.246905    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:14.258888    9036 logs.go:276] 0 containers: []
	W0920 10:57:14.258903    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:14.258976    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:14.269155    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:14.269175    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:14.269182    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:14.281415    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:14.281426    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:57:14.293348    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:14.293364    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:14.307753    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:14.307763    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:14.322096    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:14.322110    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:14.339419    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:14.339429    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:14.375962    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:14.375973    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:14.390432    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:14.390442    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:14.428029    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:14.428046    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:14.439464    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:14.439476    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:14.451724    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:14.451736    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:14.466658    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:14.466672    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:14.506281    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:14.506293    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:14.510401    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:14.510410    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:14.532789    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:14.532798    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:14.544669    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:14.544680    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:14.560635    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:14.560647    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:17.075397    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:22.076830    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:22.076935    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:22.093772    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:22.093861    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:22.104819    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:22.104900    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:22.115577    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:22.115649    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:22.125867    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:22.125937    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:22.136018    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:22.136089    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:22.147946    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:22.148028    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:22.157839    9036 logs.go:276] 0 containers: []
	W0920 10:57:22.157857    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:22.157923    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:22.168478    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:22.168496    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:22.168502    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:22.173035    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:22.173040    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:22.187516    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:22.187525    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:22.202292    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:22.202302    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:22.213359    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:22.213373    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:22.227697    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:22.227706    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:22.241106    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:22.241116    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:57:22.252728    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:22.252739    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:22.289051    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:22.289062    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:22.326976    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:22.326988    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:22.342759    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:22.342773    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:22.356225    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:22.356240    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:22.367678    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:22.367688    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:22.404486    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:22.404494    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:22.418100    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:22.418109    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:22.430212    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:22.430223    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:22.447366    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:22.447377    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:24.973336    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:29.975647    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:29.975747    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:29.987633    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:29.987716    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:29.998081    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:29.998174    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:30.012523    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:30.012610    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:30.023145    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:30.023231    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:30.034098    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:30.034182    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:30.045454    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:30.045536    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:30.055753    9036 logs.go:276] 0 containers: []
	W0920 10:57:30.055764    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:30.055833    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:30.066346    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:30.066366    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:30.066372    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:30.101634    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:30.101647    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:30.115459    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:30.115473    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:30.130981    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:30.130992    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:30.154457    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:30.154464    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:30.166044    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:30.166054    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:30.170480    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:30.170486    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:30.208519    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:30.208529    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:30.225871    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:30.225880    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:30.239027    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:30.239037    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:30.253208    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:30.253218    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:30.267927    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:30.267937    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:30.278804    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:30.278815    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:30.294438    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:30.294449    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:30.332981    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:30.332988    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:30.345269    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:30.345283    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:30.357349    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:30.357360    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:57:32.871588    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:37.873844    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:37.873963    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:37.886718    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:37.886802    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:37.897655    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:37.897737    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:37.908281    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:37.908372    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:37.918502    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:37.918583    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:37.928726    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:37.928809    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:37.940208    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:37.940297    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:37.950149    9036 logs.go:276] 0 containers: []
	W0920 10:57:37.950160    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:37.950231    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:37.960478    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:37.960497    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:37.960502    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:37.971886    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:37.971901    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:37.985261    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:37.985271    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:37.999199    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:37.999212    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:38.013125    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:38.013136    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:38.027474    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:38.027484    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:38.045721    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:38.045732    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:38.057166    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:38.057177    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:38.079611    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:38.079618    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:38.091302    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:38.091316    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:38.126800    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:38.126815    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:38.165023    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:38.165034    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:38.176724    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:38.176732    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:57:38.188135    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:38.188145    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:38.224261    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:38.224272    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:38.239433    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:38.239445    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:38.256476    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:38.256487    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:40.762594    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:45.764976    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:45.765091    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:45.775735    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:45.775824    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:45.786545    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:45.786631    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:45.797462    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:45.797542    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:45.808808    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:45.808888    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:45.820114    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:45.820198    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:45.831561    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:45.831648    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:45.842245    9036 logs.go:276] 0 containers: []
	W0920 10:57:45.842257    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:45.842324    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:45.854338    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:45.854357    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:45.854362    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:45.894645    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:45.894666    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:45.913218    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:45.913233    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:45.929805    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:45.929821    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:45.954160    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:45.954172    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:45.958679    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:45.958686    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:45.997779    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:45.997790    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:46.012209    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:46.012223    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:46.026400    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:46.026411    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:46.038172    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:46.038183    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:46.058464    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:46.058476    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:46.070998    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:46.071008    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:46.082781    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:46.082794    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:46.100352    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:46.100366    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:46.112528    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:46.112540    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:46.124866    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:46.124877    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:46.163869    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:46.163884    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:57:48.676400    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:53.678747    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:53.678941    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:53.693238    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:57:53.693335    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:53.704965    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:57:53.705057    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:53.715623    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:57:53.715709    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:53.726670    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:57:53.726755    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:53.737407    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:57:53.737475    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:53.748254    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:57:53.748332    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:53.759975    9036 logs.go:276] 0 containers: []
	W0920 10:57:53.759987    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:53.760056    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:53.770821    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:57:53.770837    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:57:53.770842    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:57:53.783485    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:57:53.783496    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:57:53.798861    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:57:53.798873    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:57:53.836977    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:57:53.836990    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:57:53.848041    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:57:53.848053    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:57:53.859565    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:57:53.859578    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:57:53.871353    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:57:53.871364    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:57:53.883020    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:53.883030    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:53.905217    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:53.905224    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:53.941699    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:57:53.941710    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:57:53.956643    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:57:53.956651    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:57:53.973622    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:57:53.973640    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:57:53.994308    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:57:53.994319    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:54.006332    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:54.006347    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:54.011180    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:54.011188    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:54.045853    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:57:54.045867    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:57:54.063699    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:57:54.063709    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:57:56.588165    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:01.590407    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:01.590598    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:01.605226    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:58:01.605312    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:01.616791    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:58:01.616873    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:01.627409    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:58:01.627484    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:01.638823    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:58:01.638906    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:01.649465    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:58:01.649542    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:01.660399    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:58:01.660471    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:01.671258    9036 logs.go:276] 0 containers: []
	W0920 10:58:01.671277    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:01.671353    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:01.682094    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:58:01.682112    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:58:01.682119    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:58:01.693998    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:58:01.694010    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:58:01.715341    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:58:01.715351    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:58:01.729553    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:58:01.729566    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:58:01.741343    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:58:01.741354    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:58:01.759464    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:58:01.759476    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:58:01.773117    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:01.773130    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:01.797161    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:01.797167    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:01.832316    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:58:01.832331    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:58:01.844180    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:58:01.844192    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:01.856989    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:58:01.857000    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:58:01.872621    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:01.872635    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:01.877186    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:58:01.877194    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:58:01.914604    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:58:01.914615    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:58:01.929280    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:58:01.929291    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:58:01.941421    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:58:01.941433    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:58:01.952243    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:01.952255    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:04.490905    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:09.493101    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:09.493278    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:09.504400    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:58:09.504492    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:09.514846    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:58:09.514935    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:09.529515    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:58:09.529596    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:09.540162    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:58:09.540249    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:09.553661    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:58:09.553741    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:09.566060    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:58:09.566146    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:09.576987    9036 logs.go:276] 0 containers: []
	W0920 10:58:09.577000    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:09.577077    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:09.587253    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:58:09.587271    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:58:09.587276    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:58:09.602175    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:58:09.602185    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:58:09.619296    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:58:09.619307    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:58:09.633496    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:58:09.633509    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:58:09.646476    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:58:09.646488    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:58:09.660273    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:09.660283    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:09.694685    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:58:09.694701    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:58:09.733712    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:58:09.733723    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:58:09.745914    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:58:09.745926    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:58:09.757459    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:09.757471    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:09.779823    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:58:09.779832    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:09.793193    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:58:09.793204    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:58:09.812325    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:58:09.812334    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:58:09.824331    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:58:09.824341    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:58:09.836738    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:58:09.836749    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:58:09.850519    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:09.850529    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:09.887994    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:09.888002    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:12.394043    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:17.396208    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:17.396317    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:17.406550    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:58:17.406634    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:17.417018    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:58:17.417093    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:17.427827    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:58:17.427907    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:17.443644    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:58:17.443730    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:17.455945    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:58:17.456015    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:17.466710    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:58:17.466789    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:17.477624    9036 logs.go:276] 0 containers: []
	W0920 10:58:17.477634    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:17.477699    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:17.488165    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:58:17.488184    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:58:17.488189    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:58:17.527452    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:58:17.527463    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:58:17.541324    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:58:17.541340    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:58:17.552990    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:58:17.553002    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:58:17.563815    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:17.563827    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:17.601669    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:17.601686    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:17.606024    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:58:17.606032    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:58:17.620581    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:58:17.620592    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:58:17.637070    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:58:17.637087    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:58:17.649176    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:58:17.649191    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:58:17.662294    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:17.662305    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:17.697742    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:58:17.697752    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:58:17.712587    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:58:17.712599    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:58:17.723625    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:58:17.723639    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:58:17.735443    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:58:17.735453    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:58:17.754041    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:58:17.754051    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:17.767043    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:17.767056    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:20.292788    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:25.295476    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:25.295678    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:25.310149    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:58:25.310230    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:25.320863    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:58:25.320929    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:25.331231    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:58:25.331315    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:25.342360    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:58:25.342440    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:25.364135    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:58:25.364278    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:25.377380    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:58:25.377453    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:25.387518    9036 logs.go:276] 0 containers: []
	W0920 10:58:25.387530    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:25.387598    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:25.398043    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:58:25.398060    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:58:25.398065    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:58:25.416029    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:58:25.416040    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:58:25.429852    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:58:25.429866    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:58:25.441588    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:25.441599    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:25.479124    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:58:25.479137    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:58:25.490384    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:58:25.490395    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:58:25.508039    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:58:25.508049    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:58:25.545152    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:58:25.545163    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:58:25.559325    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:58:25.559334    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:58:25.580722    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:58:25.580738    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:58:25.595763    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:58:25.595772    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:58:25.607378    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:25.607388    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:25.630164    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:25.630172    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:25.634731    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:25.634739    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:25.668916    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:58:25.668925    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:25.681002    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:58:25.681016    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:58:25.693000    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:58:25.693009    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:58:28.206970    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:33.209279    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:33.209472    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:33.226447    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:58:33.226548    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:33.239251    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:58:33.239340    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:33.250374    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:58:33.250452    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:33.260984    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:58:33.261073    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:33.271368    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:58:33.271452    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:33.281988    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:58:33.282068    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:33.291691    9036 logs.go:276] 0 containers: []
	W0920 10:58:33.291703    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:33.291777    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:33.302288    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:58:33.302305    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:58:33.302311    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:58:33.314201    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:58:33.314212    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:33.326713    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:33.326726    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:33.331498    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:58:33.331505    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:58:33.343024    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:58:33.343036    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:58:33.356864    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:58:33.356873    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:58:33.368418    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:58:33.368429    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:58:33.381977    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:33.381987    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:33.430410    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:58:33.430422    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:58:33.488503    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:33.488517    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:33.511685    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:33.511696    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:33.550100    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:58:33.550109    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:58:33.573301    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:58:33.573315    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:58:33.588047    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:58:33.588057    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:58:33.599331    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:58:33.599341    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:58:33.610161    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:58:33.610171    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:58:33.624293    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:58:33.624304    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:58:36.143274    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:41.145543    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:58:41.145757    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:58:41.163262    9036 logs.go:276] 2 containers: [67063f9c0906 1619d098154d]
	I0920 10:58:41.163350    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:58:41.176699    9036 logs.go:276] 2 containers: [fa0d754c8b43 679ec37c5db9]
	I0920 10:58:41.176790    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:58:41.188362    9036 logs.go:276] 1 containers: [8be965661acc]
	I0920 10:58:41.188439    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:58:41.198791    9036 logs.go:276] 2 containers: [e9fdf453ea14 aceabc06111c]
	I0920 10:58:41.198875    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:58:41.209357    9036 logs.go:276] 1 containers: [3e88898ab872]
	I0920 10:58:41.209444    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:58:41.219599    9036 logs.go:276] 2 containers: [7ad974279fdd bbc78c4773e8]
	I0920 10:58:41.219682    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:58:41.230061    9036 logs.go:276] 0 containers: []
	W0920 10:58:41.230072    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:58:41.230137    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:58:41.240622    9036 logs.go:276] 2 containers: [8f959e4f7d55 2c04229858fd]
	I0920 10:58:41.240642    9036 logs.go:123] Gathering logs for kube-controller-manager [bbc78c4773e8] ...
	I0920 10:58:41.240647    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc78c4773e8"
	I0920 10:58:41.255854    9036 logs.go:123] Gathering logs for storage-provisioner [8f959e4f7d55] ...
	I0920 10:58:41.255864    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f959e4f7d55"
	I0920 10:58:41.267262    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:58:41.267272    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:58:41.304263    9036 logs.go:123] Gathering logs for kube-apiserver [67063f9c0906] ...
	I0920 10:58:41.304272    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67063f9c0906"
	I0920 10:58:41.320099    9036 logs.go:123] Gathering logs for kube-apiserver [1619d098154d] ...
	I0920 10:58:41.320110    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1619d098154d"
	I0920 10:58:41.357702    9036 logs.go:123] Gathering logs for kube-proxy [3e88898ab872] ...
	I0920 10:58:41.357715    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e88898ab872"
	I0920 10:58:41.373579    9036 logs.go:123] Gathering logs for kube-controller-manager [7ad974279fdd] ...
	I0920 10:58:41.373589    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad974279fdd"
	I0920 10:58:41.395071    9036 logs.go:123] Gathering logs for storage-provisioner [2c04229858fd] ...
	I0920 10:58:41.395082    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c04229858fd"
	I0920 10:58:41.406048    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:58:41.406060    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:58:41.427949    9036 logs.go:123] Gathering logs for etcd [679ec37c5db9] ...
	I0920 10:58:41.427957    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 679ec37c5db9"
	I0920 10:58:41.442591    9036 logs.go:123] Gathering logs for coredns [8be965661acc] ...
	I0920 10:58:41.442602    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8be965661acc"
	I0920 10:58:41.460021    9036 logs.go:123] Gathering logs for kube-scheduler [e9fdf453ea14] ...
	I0920 10:58:41.460034    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fdf453ea14"
	I0920 10:58:41.482353    9036 logs.go:123] Gathering logs for kube-scheduler [aceabc06111c] ...
	I0920 10:58:41.482365    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aceabc06111c"
	I0920 10:58:41.502750    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:58:41.502759    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:58:41.507301    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:58:41.507308    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:58:41.543895    9036 logs.go:123] Gathering logs for etcd [fa0d754c8b43] ...
	I0920 10:58:41.543907    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0d754c8b43"
	I0920 10:58:41.557530    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:58:41.557543    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:58:44.072901    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:49.075243    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
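
(Editor's note: the api_server.go:253/269 pairs above record a healthz probe against https://10.0.2.15:8443/healthz that times out after roughly 5 seconds on each attempt. Below is a minimal Go sketch of that probe pattern; the TLS handling and retry count are assumptions for illustration, not minikube's exact implementation.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" in the log
		Transport: &http.Transport{
			// The apiserver uses minikube's own CA; skipping verification
			// here is purely to keep the sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 3; i++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// e.g. "Client.Timeout exceeded while awaiting headers"
			fmt.Println("stopped:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}
```
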
	I0920 10:58:49.075323    9036 kubeadm.go:597] duration metric: took 4m4.124215667s to restartPrimaryControlPlane
	W0920 10:58:49.075377    9036 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 10:58:49.075404    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0920 10:58:50.084958    9036 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.009548959s)
	I0920 10:58:50.085035    9036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:58:50.090019    9036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:58:50.093415    9036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:58:50.096068    9036 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:58:50.096073    9036 kubeadm.go:157] found existing configuration files:
	
	I0920 10:58:50.096097    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/admin.conf
	I0920 10:58:50.098503    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:58:50.098527    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:58:50.101734    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/kubelet.conf
	I0920 10:58:50.104665    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:58:50.104692    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:58:50.107195    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/controller-manager.conf
	I0920 10:58:50.110164    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:58:50.110188    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:58:50.113130    9036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/scheduler.conf
	I0920 10:58:50.115740    9036 kubeadm.go:163] "https://control-plane.minikube.internal:51540" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51540 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:58:50.115765    9036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
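
(Editor's note: the kubeadm.go:163 blocks above apply the same rule to each of the four kubeconfigs: keep the file only if it references the expected control-plane endpoint, otherwise remove it before `kubeadm init`. A Go sketch of that cleanup logic follows; paths and endpoint are taken from the log, the helper itself is illustrative.)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51540"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treated as stale, matching the
			// "may not be in ... - will remove" branch in the log.
			os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}
```
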
	I0920 10:58:50.118457    9036 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 10:58:50.135588    9036 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0920 10:58:50.135716    9036 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 10:58:50.195147    9036 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 10:58:50.195198    9036 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 10:58:50.195247    9036 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 10:58:50.243493    9036 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 10:58:50.246813    9036 out.go:235]   - Generating certificates and keys ...
	I0920 10:58:50.246846    9036 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 10:58:50.246875    9036 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 10:58:50.246909    9036 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 10:58:50.246937    9036 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 10:58:50.246968    9036 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 10:58:50.246992    9036 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 10:58:50.247020    9036 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 10:58:50.247691    9036 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 10:58:50.247724    9036 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 10:58:50.247758    9036 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 10:58:50.247794    9036 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 10:58:50.247851    9036 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 10:58:50.318900    9036 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 10:58:50.405682    9036 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 10:58:50.445622    9036 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 10:58:50.480605    9036 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 10:58:50.510322    9036 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 10:58:50.510855    9036 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 10:58:50.510946    9036 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 10:58:50.603392    9036 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 10:58:50.611402    9036 out.go:235]   - Booting up control plane ...
	I0920 10:58:50.611457    9036 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 10:58:50.611500    9036 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 10:58:50.611534    9036 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 10:58:50.611609    9036 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 10:58:50.611697    9036 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 10:58:54.609471    9036 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001636 seconds
	I0920 10:58:54.609620    9036 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 10:58:54.613039    9036 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 10:58:55.132546    9036 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 10:58:55.132781    9036 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-423000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 10:58:55.636067    9036 kubeadm.go:310] [bootstrap-token] Using token: avlvxy.orzbh4xyhzrp3iig
	I0920 10:58:55.639145    9036 out.go:235]   - Configuring RBAC rules ...
	I0920 10:58:55.639207    9036 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 10:58:55.639255    9036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 10:58:55.644547    9036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 10:58:55.649609    9036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 10:58:55.650388    9036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 10:58:55.651223    9036 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 10:58:55.654566    9036 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 10:58:55.793148    9036 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 10:58:56.040236    9036 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 10:58:56.041110    9036 kubeadm.go:310] 
	I0920 10:58:56.041146    9036 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 10:58:56.041149    9036 kubeadm.go:310] 
	I0920 10:58:56.041196    9036 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 10:58:56.041201    9036 kubeadm.go:310] 
	I0920 10:58:56.041211    9036 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 10:58:56.041249    9036 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 10:58:56.041338    9036 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 10:58:56.041344    9036 kubeadm.go:310] 
	I0920 10:58:56.041390    9036 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 10:58:56.041395    9036 kubeadm.go:310] 
	I0920 10:58:56.041441    9036 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 10:58:56.041446    9036 kubeadm.go:310] 
	I0920 10:58:56.041494    9036 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 10:58:56.041536    9036 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 10:58:56.041587    9036 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 10:58:56.041594    9036 kubeadm.go:310] 
	I0920 10:58:56.041656    9036 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 10:58:56.041708    9036 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 10:58:56.041714    9036 kubeadm.go:310] 
	I0920 10:58:56.041812    9036 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token avlvxy.orzbh4xyhzrp3iig \
	I0920 10:58:56.041884    9036 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:129743b48d26fde689f5c5778f41c5956c7837483c32951bacd09a1e6883dcfa \
	I0920 10:58:56.041898    9036 kubeadm.go:310] 	--control-plane 
	I0920 10:58:56.041933    9036 kubeadm.go:310] 
	I0920 10:58:56.041987    9036 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 10:58:56.041995    9036 kubeadm.go:310] 
	I0920 10:58:56.042064    9036 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token avlvxy.orzbh4xyhzrp3iig \
	I0920 10:58:56.042150    9036 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:129743b48d26fde689f5c5778f41c5956c7837483c32951bacd09a1e6883dcfa 
	I0920 10:58:56.042319    9036 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 10:58:56.042332    9036 cni.go:84] Creating CNI manager for ""
	I0920 10:58:56.042340    9036 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:58:56.046141    9036 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:58:56.054097    9036 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:58:56.057252    9036 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
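
(Editor's note: the log records a 496-byte /etc/cni/net.d/1-k8s.conflist being written for the bridge CNI, but its contents are not shown. The snippet below embeds a generic bridge conflist of the usual CNI shape purely as an illustration of what such a file looks like; it is not the actual file minikube wrote.)

```go
package main

import "fmt"

// bridgeConflist is an illustrative bridge CNI configuration, not the
// verbatim 1-k8s.conflist from the log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() { fmt.Println(bridgeConflist) }
```
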
	I0920 10:58:56.064233    9036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:58:56.064344    9036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:58:56.064368    9036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-423000 minikube.k8s.io/updated_at=2024_09_20T10_58_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=stopped-upgrade-423000 minikube.k8s.io/primary=true
	I0920 10:58:56.108627    9036 ops.go:34] apiserver oom_adj: -16
	I0920 10:58:56.108645    9036 kubeadm.go:1113] duration metric: took 44.371458ms to wait for elevateKubeSystemPrivileges
	I0920 10:58:56.108654    9036 kubeadm.go:394] duration metric: took 4m11.17100575s to StartCluster
	I0920 10:58:56.108664    9036 settings.go:142] acquiring lock: {Name:mk5f352888690de611711a90a16fd3b08e6afbf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:58:56.108761    9036 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:58:56.109169    9036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/kubeconfig: {Name:mkec5cafcbf4b1660482b6f210de54829a52092a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:58:56.109393    9036 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:58:56.109420    9036 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 10:58:56.109516    9036 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:58:56.109529    9036 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-423000"
	I0920 10:58:56.109548    9036 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-423000"
	W0920 10:58:56.109559    9036 addons.go:243] addon storage-provisioner should already be in state true
	I0920 10:58:56.109565    9036 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-423000"
	I0920 10:58:56.109598    9036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-423000"
	I0920 10:58:56.109625    9036 host.go:66] Checking if "stopped-upgrade-423000" exists ...
	I0920 10:58:56.110522    9036 retry.go:31] will retry after 975.687116ms: connect: dial unix /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/monitor: connect: connection refused
	I0920 10:58:56.115127    9036 out.go:177] * Verifying Kubernetes components...
	I0920 10:58:56.121123    9036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:58:56.124196    9036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:58:56.127169    9036 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:58:56.127178    9036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:58:56.127186    9036 sshutil.go:53] new ssh client: &{IP:localhost Port:51506 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0920 10:58:56.216688    9036 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:58:56.223802    9036 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:58:56.223868    9036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:58:56.229304    9036 api_server.go:72] duration metric: took 119.896458ms to wait for apiserver process to appear ...
	I0920 10:58:56.229314    9036 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:58:56.229323    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:58:56.238827    9036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:58:57.089132    9036 kapi.go:59] client config for stopped-upgrade-423000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/stopped-upgrade-423000/client.key", CAFile:"/Users/jenkins/minikube-integration/19678-6679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f5a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
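
(Editor's note: the rest.Config dump above (kapi.go:59) shows a client built from the profile's client.crt/client.key and the cluster CA. An equivalent config can be obtained with client-go as sketched below; the kubeconfig path is the one from the log, and the code is illustrative, not minikube's own.)

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/Users/jenkins/minikube-integration/19678-6679/kubeconfig"
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	// Host and the TLS cert/key/CA files correspond to fields in the dump above.
	fmt.Println("host:", cfg.Host)
}
```
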
	I0920 10:58:57.089264    9036 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-423000"
	W0920 10:58:57.089273    9036 addons.go:243] addon default-storageclass should already be in state true
	I0920 10:58:57.089286    9036 host.go:66] Checking if "stopped-upgrade-423000" exists ...
	I0920 10:58:57.089905    9036 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:58:57.089911    9036 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:58:57.089917    9036 sshutil.go:53] new ssh client: &{IP:localhost Port:51506 SSHKeyPath:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/stopped-upgrade-423000/id_rsa Username:docker}
	I0920 10:58:57.120145    9036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:58:57.185558    9036 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 10:58:57.185573    9036 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 10:59:01.231375    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:01.231419    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:06.231695    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:06.231721    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:11.232036    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:11.232063    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:16.232508    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:16.232549    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:21.233267    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:21.233316    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:26.234235    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:26.234276    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0920 10:59:27.185746    9036 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0920 10:59:27.189978    9036 out.go:177] * Enabled addons: storage-provisioner
	I0920 10:59:27.199909    9036 addons.go:510] duration metric: took 31.090657917s for enable addons: enabled=[storage-provisioner]
	I0920 10:59:31.235314    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:31.235355    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:36.236759    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:36.236822    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:41.238545    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:41.238598    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:46.239430    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:46.239472    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:51.241638    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:51.241655    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:59:56.241754    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:59:56.241887    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:59:56.260783    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 10:59:56.260871    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:59:56.271323    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 10:59:56.271412    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:59:56.281600    9036 logs.go:276] 2 containers: [010c5980ca22 929cf2692faa]
	I0920 10:59:56.281676    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:59:56.291921    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 10:59:56.291997    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:59:56.302546    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 10:59:56.302622    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:59:56.312437    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 10:59:56.312503    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:59:56.322451    9036 logs.go:276] 0 containers: []
	W0920 10:59:56.322464    9036 logs.go:278] No container was found matching "kindnet"
	I0920 10:59:56.322543    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:59:56.332827    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 10:59:56.332842    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 10:59:56.332848    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:59:56.370262    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 10:59:56.370275    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:59:56.375205    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:59:56.375212    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:59:56.409754    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 10:59:56.409769    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 10:59:56.424483    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 10:59:56.424494    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 10:59:56.436119    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 10:59:56.436130    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 10:59:56.451355    9036 logs.go:123] Gathering logs for container status ...
	I0920 10:59:56.451365    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:59:56.463244    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 10:59:56.463257    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 10:59:56.477659    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 10:59:56.477669    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 10:59:56.488749    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 10:59:56.488763    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 10:59:56.500173    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 10:59:56.500184    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 10:59:56.518128    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 10:59:56.518139    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 10:59:56.529772    9036 logs.go:123] Gathering logs for Docker ...
	I0920 10:59:56.529782    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:59:59.054836    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:00:04.057337    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:00:04.057520    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:00:04.071798    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:00:04.071888    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:00:04.083487    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:00:04.083581    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:00:04.094541    9036 logs.go:276] 2 containers: [010c5980ca22 929cf2692faa]
	I0920 11:00:04.094630    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:00:04.105203    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:00:04.105299    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:00:04.116225    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:00:04.116330    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:00:04.131109    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:00:04.131189    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:00:04.141140    9036 logs.go:276] 0 containers: []
	W0920 11:00:04.141152    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:00:04.141231    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:00:04.151930    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:00:04.151946    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:00:04.151953    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:00:04.187494    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:00:04.187503    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:00:04.192246    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:00:04.192254    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:00:04.206010    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:00:04.206021    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:00:04.221070    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:00:04.221079    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:00:04.240270    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:00:04.240282    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:00:04.258831    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:00:04.258842    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:00:04.292847    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:00:04.292857    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:00:04.306752    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:00:04.306760    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:00:04.318533    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:00:04.318545    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:00:04.332956    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:00:04.332966    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:00:04.351434    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:00:04.351446    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:00:04.377163    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:00:04.377174    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:00:06.892061    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:00:11.894345    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:00:11.894808    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:00:11.927533    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:00:11.927673    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:00:11.946484    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:00:11.946576    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:00:11.961633    9036 logs.go:276] 2 containers: [010c5980ca22 929cf2692faa]
	I0920 11:00:11.961706    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:00:11.973312    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:00:11.973377    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:00:11.983549    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:00:11.983631    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:00:11.993520    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:00:11.993595    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:00:12.003032    9036 logs.go:276] 0 containers: []
	W0920 11:00:12.003043    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:00:12.003102    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:00:12.018754    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:00:12.018771    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:00:12.018776    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:00:12.030073    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:00:12.030083    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:00:12.064560    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:00:12.064567    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:00:12.102987    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:00:12.103003    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:00:12.114233    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:00:12.114246    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:00:12.131023    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:00:12.131033    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:00:12.145383    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:00:12.145392    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:00:12.156887    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:00:12.156897    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:00:12.180120    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:00:12.180129    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:00:12.191674    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:00:12.191690    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:00:12.197545    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:00:12.197557    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:00:12.211145    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:00:12.211156    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:00:12.224886    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:00:12.224895    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:00:14.737862    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:00:19.740173    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:00:19.740349    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:00:19.761569    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:00:19.761683    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:00:19.775019    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:00:19.775102    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:00:19.786307    9036 logs.go:276] 2 containers: [010c5980ca22 929cf2692faa]
	I0920 11:00:19.786384    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:00:19.796411    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:00:19.796487    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:00:19.806891    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:00:19.806965    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:00:19.817431    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:00:19.817510    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:00:19.826968    9036 logs.go:276] 0 containers: []
	W0920 11:00:19.826977    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:00:19.827040    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:00:19.837170    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:00:19.837187    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:00:19.837192    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:00:19.870550    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:00:19.870560    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:00:19.874419    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:00:19.874424    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:00:19.909024    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:00:19.909035    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:00:19.923142    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:00:19.923153    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:00:19.937712    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:00:19.937720    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:00:19.961727    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:00:19.961735    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:00:19.973087    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:00:19.973100    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:00:19.987151    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:00:19.987165    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:00:19.998632    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:00:19.998642    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:00:20.009617    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:00:20.009629    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:00:20.021246    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:00:20.021260    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:00:20.038549    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:00:20.038559    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
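After discovery, each container ID found is tailed with `docker logs --tail 400`, while kubelet and Docker/cri-docker are read from journald, dmesg supplies kernel-level warnings, and `kubectl describe nodes` runs against the in-VM kubeconfig. A simplified Go sketch of the gather step (container IDs are copied from the log; error handling is reduced to a print, and the real flow runs these as root inside the VM):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes one gather command and prints whatever it produced.
    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            fmt.Printf("%s %v failed: %v\n", name, args, err)
        }
        fmt.Print(string(out))
    }

    func main() {
        // container IDs copied from the discovery step above
        containers := map[string]string{
            "kube-apiserver": "031799fcb181",
            "etcd":           "45134914d5b8",
            "kube-scheduler": "8481bad0fa9d",
        }
        for component, id := range containers {
            fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
            run("docker", "logs", "--tail", "400", id)
        }
        // host-service sources, mirroring the journalctl lines in the log
        run("journalctl", "-u", "kubelet", "-n", "400")
        run("journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400")
    }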
	I0920 11:00:22.552214    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:00:27.554733    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:00:27.555283    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:00:27.596950    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:00:27.597130    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:00:27.618588    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:00:27.618703    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:00:27.633705    9036 logs.go:276] 2 containers: [010c5980ca22 929cf2692faa]
	I0920 11:00:27.633796    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:00:27.646224    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:00:27.646303    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:00:27.660530    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:00:27.660612    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:00:27.675938    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:00:27.676020    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:00:27.686000    9036 logs.go:276] 0 containers: []
	W0920 11:00:27.686012    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:00:27.686079    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:00:27.697578    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:00:27.697593    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:00:27.697598    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:00:27.711082    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:00:27.711093    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:00:27.722371    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:00:27.722384    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:00:27.733809    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:00:27.733818    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:00:27.757584    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:00:27.757592    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:00:27.768748    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:00:27.768761    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:00:27.803705    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:00:27.803712    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:00:27.843460    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:00:27.843473    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:00:27.857625    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:00:27.857635    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:00:27.869274    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:00:27.869285    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:00:27.891705    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:00:27.891714    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:00:27.896425    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:00:27.896434    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:00:27.907585    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:00:27.907596    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:00:30.423783    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:00:35.426557    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:00:35.427134    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:00:35.467091    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:00:35.467264    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:00:35.488220    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:00:35.488340    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:00:35.503316    9036 logs.go:276] 2 containers: [010c5980ca22 929cf2692faa]
	I0920 11:00:35.503408    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:00:35.515399    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:00:35.515475    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:00:35.526010    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:00:35.526089    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:00:35.539305    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:00:35.539387    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:00:35.549581    9036 logs.go:276] 0 containers: []
	W0920 11:00:35.549593    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:00:35.549664    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:00:35.559735    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:00:35.559753    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:00:35.559759    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:00:35.578121    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:00:35.578133    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:00:35.591744    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:00:35.591753    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:00:35.610428    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:00:35.610439    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:00:35.621653    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:00:35.621665    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:00:35.656439    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:00:35.656454    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:00:35.671898    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:00:35.671909    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:00:35.687187    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:00:35.687198    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:00:35.698475    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:00:35.698488    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:00:35.722536    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:00:35.722543    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:00:35.734196    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:00:35.734209    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:00:35.769309    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:00:35.769317    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:00:35.773410    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:00:35.773417    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:00:38.288987    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:00:43.291732    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:00:43.292380    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:00:43.330203    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:00:43.330352    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:00:43.349583    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:00:43.349699    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:00:43.363785    9036 logs.go:276] 2 containers: [010c5980ca22 929cf2692faa]
	I0920 11:00:43.363876    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:00:43.376074    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:00:43.376145    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:00:43.389058    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:00:43.389141    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:00:43.399728    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:00:43.399813    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:00:43.410061    9036 logs.go:276] 0 containers: []
	W0920 11:00:43.410072    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:00:43.410147    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:00:43.420617    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:00:43.420633    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:00:43.420639    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:00:43.454558    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:00:43.454569    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:00:43.468583    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:00:43.468592    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:00:43.480017    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:00:43.480027    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:00:43.491401    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:00:43.491411    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:00:43.503012    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:00:43.503023    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:00:43.519957    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:00:43.519967    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:00:43.531410    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:00:43.531422    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:00:43.536147    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:00:43.536152    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:00:43.573592    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:00:43.573602    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:00:43.591134    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:00:43.591145    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:00:43.607602    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:00:43.607612    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:00:43.623015    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:00:43.623023    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
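Every one of these commands is attributed to ssh_runner.go:195 because it executes inside the guest VM, wrapped in /bin/bash -c, over SSH rather than on the host. A self-contained sketch of that pattern using golang.org/x/crypto/ssh (the user, password, and reachability of the guest address are placeholders and assumptions, not minikube's real connection details):

    package main

    import (
        "bytes"
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH wraps a command in /bin/bash -c and runs it in one SSH session,
    // echoing the Run: lines in the log above.
    func runOverSSH(client *ssh.Client, cmd string) (string, error) {
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        var out bytes.Buffer
        session.Stdout = &out
        err = session.Run(fmt.Sprintf("/bin/bash -c %q", cmd))
        return out.String(), err
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",                                 // placeholder user
            Auth:            []ssh.AuthMethod{ssh.Password("secret")}, // placeholder credential
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        // guest address taken from the log; reachability depends on the QEMU network setup
        client, err := ssh.Dial("tcp", "10.0.2.15:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        out, err := runOverSSH(client, "sudo journalctl -u kubelet -n 400")
        if err != nil {
            log.Println(err)
        }
        fmt.Println(out)
    }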
	I0920 11:00:46.149143    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:00:51.151596    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:00:51.152257    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:00:51.190281    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:00:51.190443    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:00:51.211589    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:00:51.211713    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:00:51.229841    9036 logs.go:276] 2 containers: [010c5980ca22 929cf2692faa]
	I0920 11:00:51.229928    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:00:51.241808    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:00:51.241888    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:00:51.252899    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:00:51.252975    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:00:51.263848    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:00:51.263937    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:00:51.274602    9036 logs.go:276] 0 containers: []
	W0920 11:00:51.274614    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:00:51.274676    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:00:51.286910    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:00:51.286925    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:00:51.286931    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:00:51.301429    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:00:51.301438    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:00:51.319346    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:00:51.319359    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:00:51.334018    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:00:51.334028    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:00:51.348123    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:00:51.348137    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:00:51.370721    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:00:51.370734    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:00:51.405986    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:00:51.406000    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:00:51.410135    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:00:51.410143    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:00:51.443310    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:00:51.443324    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:00:51.458613    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:00:51.458623    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:00:51.470407    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:00:51.470416    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:00:51.486693    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:00:51.486709    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:00:51.510393    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:00:51.510403    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:00:54.023738    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:00:59.026528    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:00:59.026706    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:00:59.043875    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:00:59.043975    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:00:59.057035    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:00:59.057124    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:00:59.068376    9036 logs.go:276] 2 containers: [010c5980ca22 929cf2692faa]
	I0920 11:00:59.068456    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:00:59.078875    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:00:59.078962    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:00:59.089511    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:00:59.089584    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:00:59.100327    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:00:59.100409    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:00:59.110435    9036 logs.go:276] 0 containers: []
	W0920 11:00:59.110445    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:00:59.110509    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:00:59.120797    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:00:59.120810    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:00:59.120815    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:00:59.139697    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:00:59.139710    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:00:59.150854    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:00:59.150866    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:00:59.173944    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:00:59.173950    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:00:59.208306    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:00:59.208318    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:00:59.222494    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:00:59.222505    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:00:59.236122    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:00:59.236131    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:00:59.247865    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:00:59.247876    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:00:59.259985    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:00:59.259997    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:00:59.272726    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:00:59.272735    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:00:59.307765    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:00:59.307772    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:00:59.311777    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:00:59.311784    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:00:59.322849    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:00:59.322862    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:01:01.839317    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:01:06.842059    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:01:06.842562    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:01:06.883549    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:01:06.883726    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:01:06.905272    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:01:06.905386    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:01:06.919942    9036 logs.go:276] 2 containers: [010c5980ca22 929cf2692faa]
	I0920 11:01:06.920046    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:01:06.934033    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:01:06.934112    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:01:06.944910    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:01:06.944996    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:01:06.955297    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:01:06.955368    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:01:06.965346    9036 logs.go:276] 0 containers: []
	W0920 11:01:06.965355    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:01:06.965417    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:01:06.975563    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:01:06.975579    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:01:06.975585    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:01:07.014131    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:01:07.014142    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:01:07.026113    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:01:07.026126    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:01:07.037668    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:01:07.037680    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:01:07.051845    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:01:07.051856    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:01:07.063708    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:01:07.063720    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:01:07.078006    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:01:07.078016    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:01:07.095225    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:01:07.095235    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:01:07.107335    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:01:07.107347    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:01:07.140826    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:01:07.140833    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:01:07.145044    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:01:07.145052    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:01:07.159449    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:01:07.159460    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:01:07.182056    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:01:07.182062    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:01:09.695540    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:01:14.697998    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:01:14.698262    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:01:14.716673    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:01:14.716768    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:01:14.730273    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:01:14.730362    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:01:14.741643    9036 logs.go:276] 4 containers: [2e582de8a63c df0514325839 010c5980ca22 929cf2692faa]
	I0920 11:01:14.741725    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:01:14.751863    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:01:14.751930    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:01:14.762357    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:01:14.762435    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:01:14.773486    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:01:14.773560    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:01:14.783457    9036 logs.go:276] 0 containers: []
	W0920 11:01:14.783471    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:01:14.783531    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:01:14.795753    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:01:14.795770    9036 logs.go:123] Gathering logs for coredns [2e582de8a63c] ...
	I0920 11:01:14.795775    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e582de8a63c"
	I0920 11:01:14.806189    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:01:14.806201    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:01:14.817615    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:01:14.817628    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:01:14.830937    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:01:14.830947    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:01:14.845393    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:01:14.845403    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:01:14.857298    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:01:14.857310    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:01:14.869087    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:01:14.869097    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:01:14.873351    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:01:14.873357    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:01:14.891137    9036 logs.go:123] Gathering logs for coredns [df0514325839] ...
	I0920 11:01:14.891149    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df0514325839"
	I0920 11:01:14.902517    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:01:14.902530    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:01:14.919913    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:01:14.919925    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:01:14.944013    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:01:14.944019    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:01:14.976904    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:01:14.976911    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:01:15.011170    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:01:15.011183    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:01:15.025430    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:01:15.025441    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
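Note the change at 11:01:14 above: the coredns filter now matches four containers instead of two (2e582de8a63c and df0514325839 are new alongside the original pair), which suggests pods were still being created or restarted during this window even though every apiserver healthz probe kept timing out.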
	I0920 11:01:17.545791    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:01:22.548679    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:01:22.549239    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:01:22.587687    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:01:22.587851    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:01:22.610357    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:01:22.610471    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:01:22.626037    9036 logs.go:276] 4 containers: [2e582de8a63c df0514325839 010c5980ca22 929cf2692faa]
	I0920 11:01:22.626134    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:01:22.639130    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:01:22.639205    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:01:22.649856    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:01:22.649937    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:01:22.664359    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:01:22.664438    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:01:22.674749    9036 logs.go:276] 0 containers: []
	W0920 11:01:22.674760    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:01:22.674831    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:01:22.685455    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:01:22.685473    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:01:22.685480    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:01:22.718932    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:01:22.718945    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:01:22.730703    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:01:22.730715    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:01:22.742040    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:01:22.742054    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:01:22.746196    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:01:22.746205    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:01:22.760298    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:01:22.760309    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:01:22.774382    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:01:22.774395    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:01:22.788301    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:01:22.788313    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:01:22.805296    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:01:22.805306    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:01:22.830118    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:01:22.830126    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:01:22.841498    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:01:22.841510    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:01:22.876268    9036 logs.go:123] Gathering logs for coredns [2e582de8a63c] ...
	I0920 11:01:22.876276    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e582de8a63c"
	I0920 11:01:22.887873    9036 logs.go:123] Gathering logs for coredns [df0514325839] ...
	I0920 11:01:22.887885    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df0514325839"
	I0920 11:01:22.900714    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:01:22.900726    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:01:22.912514    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:01:22.912527    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:01:25.426816    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:01:30.427876    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:01:30.427937    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:01:30.439025    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:01:30.439090    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:01:30.450149    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:01:30.450228    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:01:30.463088    9036 logs.go:276] 4 containers: [2e582de8a63c df0514325839 010c5980ca22 929cf2692faa]
	I0920 11:01:30.463157    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:01:30.474739    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:01:30.474809    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:01:30.485415    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:01:30.485480    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:01:30.496610    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:01:30.496693    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:01:30.509321    9036 logs.go:276] 0 containers: []
	W0920 11:01:30.509334    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:01:30.509394    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:01:30.520731    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:01:30.520748    9036 logs.go:123] Gathering logs for coredns [df0514325839] ...
	I0920 11:01:30.520754    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df0514325839"
	I0920 11:01:30.532564    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:01:30.532577    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:01:30.558775    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:01:30.558790    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:01:30.570859    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:01:30.570872    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:01:30.589298    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:01:30.589312    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:01:30.595188    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:01:30.595199    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:01:30.639170    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:01:30.639183    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:01:30.653923    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:01:30.653931    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:01:30.666834    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:01:30.666847    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:01:30.679467    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:01:30.679483    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:01:30.717258    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:01:30.717271    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:01:30.733104    9036 logs.go:123] Gathering logs for coredns [2e582de8a63c] ...
	I0920 11:01:30.733116    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e582de8a63c"
	I0920 11:01:30.745709    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:01:30.745723    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:01:30.766690    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:01:30.766702    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:01:30.779338    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:01:30.779349    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:01:33.293572    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:01:38.294434    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:01:38.294581    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:01:38.316785    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:01:38.316908    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:01:38.332138    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:01:38.332220    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:01:38.345118    9036 logs.go:276] 4 containers: [2e582de8a63c df0514325839 010c5980ca22 929cf2692faa]
	I0920 11:01:38.345200    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:01:38.363569    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:01:38.363638    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:01:38.375840    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:01:38.375930    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:01:38.387604    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:01:38.387690    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:01:38.399916    9036 logs.go:276] 0 containers: []
	W0920 11:01:38.399929    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:01:38.400002    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:01:38.412675    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:01:38.412694    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:01:38.412700    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:01:38.433842    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:01:38.433861    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:01:38.447840    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:01:38.447854    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:01:38.474511    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:01:38.474528    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:01:38.489235    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:01:38.489247    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:01:38.507386    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:01:38.507396    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:01:38.511872    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:01:38.511879    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:01:38.546282    9036 logs.go:123] Gathering logs for coredns [2e582de8a63c] ...
	I0920 11:01:38.546292    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e582de8a63c"
	I0920 11:01:38.559363    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:01:38.559376    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:01:38.571737    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:01:38.571748    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:01:38.583724    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:01:38.583734    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:01:38.600474    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:01:38.600484    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:01:38.636732    9036 logs.go:123] Gathering logs for coredns [df0514325839] ...
	I0920 11:01:38.636740    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df0514325839"
	I0920 11:01:38.648486    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:01:38.648497    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:01:38.660313    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:01:38.660325    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:01:41.173717    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:01:46.176010    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:01:46.176378    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:01:46.211472    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:01:46.211662    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:01:46.245565    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:01:46.245673    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:01:46.261810    9036 logs.go:276] 4 containers: [2e582de8a63c df0514325839 010c5980ca22 929cf2692faa]
	I0920 11:01:46.261909    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:01:46.273720    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:01:46.273801    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:01:46.283698    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:01:46.283770    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:01:46.293521    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:01:46.293588    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:01:46.304683    9036 logs.go:276] 0 containers: []
	W0920 11:01:46.304695    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:01:46.304766    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:01:46.315087    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:01:46.315103    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:01:46.315109    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:01:46.339535    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:01:46.339545    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:01:46.375436    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:01:46.375449    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:01:46.390710    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:01:46.390721    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:01:46.406367    9036 logs.go:123] Gathering logs for coredns [2e582de8a63c] ...
	I0920 11:01:46.406384    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e582de8a63c"
	I0920 11:01:46.417732    9036 logs.go:123] Gathering logs for coredns [df0514325839] ...
	I0920 11:01:46.417746    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df0514325839"
	I0920 11:01:46.428958    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:01:46.428972    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:01:46.440707    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:01:46.440721    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:01:46.452392    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:01:46.452404    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:01:46.465298    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:01:46.465311    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:01:46.500724    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:01:46.500731    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:01:46.505117    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:01:46.505126    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:01:46.516288    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:01:46.516299    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:01:46.530569    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:01:46.530579    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:01:46.544919    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:01:46.544928    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:01:49.063849    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:01:54.066468    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:01:54.066539    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:01:54.078704    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:01:54.078770    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:01:54.091314    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:01:54.091386    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:01:54.102350    9036 logs.go:276] 4 containers: [2e582de8a63c df0514325839 010c5980ca22 929cf2692faa]
	I0920 11:01:54.102428    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:01:54.113416    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:01:54.113493    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:01:54.124974    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:01:54.125067    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:01:54.136585    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:01:54.136648    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:01:54.148585    9036 logs.go:276] 0 containers: []
	W0920 11:01:54.148599    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:01:54.148681    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:01:54.160602    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:01:54.160622    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:01:54.160628    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:01:54.185919    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:01:54.185938    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:01:54.198996    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:01:54.199007    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:01:54.212063    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:01:54.212075    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:01:54.224348    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:01:54.224360    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:01:54.261483    9036 logs.go:123] Gathering logs for coredns [df0514325839] ...
	I0920 11:01:54.261496    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df0514325839"
	I0920 11:01:54.274998    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:01:54.275008    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:01:54.287681    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:01:54.287693    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:01:54.303564    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:01:54.303576    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:01:54.319709    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:01:54.319721    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:01:54.338528    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:01:54.338539    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:01:54.373551    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:01:54.373571    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:01:54.378277    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:01:54.378287    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:01:54.399286    9036 logs.go:123] Gathering logs for coredns [2e582de8a63c] ...
	I0920 11:01:54.399300    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e582de8a63c"
	I0920 11:01:54.412529    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:01:54.412540    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:01:56.931079    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:02:01.933877    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:02:01.934186    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:02:01.962906    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:02:01.963056    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:02:01.981092    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:02:01.981195    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:02:01.994739    9036 logs.go:276] 4 containers: [2e582de8a63c df0514325839 010c5980ca22 929cf2692faa]
	I0920 11:02:01.994828    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:02:02.009854    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:02:02.009922    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:02:02.020114    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:02:02.020195    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:02:02.031074    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:02:02.031159    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:02:02.041852    9036 logs.go:276] 0 containers: []
	W0920 11:02:02.041862    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:02:02.041928    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:02:02.051881    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:02:02.051901    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:02:02.051906    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:02:02.065604    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:02:02.065617    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:02:02.076814    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:02:02.076826    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:02:02.105533    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:02:02.105544    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:02:02.139537    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:02:02.139544    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:02:02.176277    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:02:02.176291    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:02:02.193759    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:02:02.193770    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:02:02.207291    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:02:02.207304    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:02:02.219959    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:02:02.219973    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:02:02.232686    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:02:02.232697    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:02:02.236775    9036 logs.go:123] Gathering logs for coredns [2e582de8a63c] ...
	I0920 11:02:02.236781    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e582de8a63c"
	I0920 11:02:02.247976    9036 logs.go:123] Gathering logs for coredns [df0514325839] ...
	I0920 11:02:02.247990    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df0514325839"
	I0920 11:02:02.259098    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:02:02.259108    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:02:02.273173    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:02:02.273185    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:02:02.287527    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:02:02.287536    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:02:04.801340    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:02:09.803702    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:02:09.803964    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:02:09.823391    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:02:09.823507    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:02:09.838225    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:02:09.838313    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:02:09.855890    9036 logs.go:276] 4 containers: [2e582de8a63c df0514325839 010c5980ca22 929cf2692faa]
	I0920 11:02:09.855994    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:02:09.866794    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:02:09.866877    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:02:09.876954    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:02:09.877040    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:02:09.887693    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:02:09.887772    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:02:09.897780    9036 logs.go:276] 0 containers: []
	W0920 11:02:09.897791    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:02:09.897863    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:02:09.908444    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:02:09.908466    9036 logs.go:123] Gathering logs for coredns [2e582de8a63c] ...
	I0920 11:02:09.908471    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e582de8a63c"
	I0920 11:02:09.919678    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:02:09.919687    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:02:09.931416    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:02:09.931424    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:02:09.942867    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:02:09.942880    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:02:09.960056    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:02:09.960067    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:02:09.971829    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:02:09.971842    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:02:09.976333    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:02:09.976342    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:02:09.991597    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:02:09.991607    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:02:10.003647    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:02:10.003658    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:02:10.037748    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:02:10.037759    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:02:10.053661    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:02:10.053674    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:02:10.078652    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:02:10.078661    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:02:10.114556    9036 logs.go:123] Gathering logs for coredns [df0514325839] ...
	I0920 11:02:10.114570    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df0514325839"
	I0920 11:02:10.129746    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:02:10.129755    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:02:10.144459    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:02:10.144472    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:02:12.658278    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:02:17.659730    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:02:17.659933    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:02:17.680267    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:02:17.680377    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:02:17.694794    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:02:17.694894    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:02:17.706950    9036 logs.go:276] 4 containers: [2e582de8a63c df0514325839 010c5980ca22 929cf2692faa]
	I0920 11:02:17.707046    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:02:17.719742    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:02:17.719818    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:02:17.731055    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:02:17.731149    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:02:17.742734    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:02:17.742824    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:02:17.753852    9036 logs.go:276] 0 containers: []
	W0920 11:02:17.753867    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:02:17.753936    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:02:17.765831    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:02:17.765852    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:02:17.765858    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:02:17.778557    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:02:17.778570    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:02:17.817919    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:02:17.817932    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:02:17.834131    9036 logs.go:123] Gathering logs for coredns [2e582de8a63c] ...
	I0920 11:02:17.834142    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e582de8a63c"
	I0920 11:02:17.846816    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:02:17.846827    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:02:17.860081    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:02:17.860094    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:02:17.873011    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:02:17.873022    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:02:17.901611    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:02:17.901628    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:02:17.940076    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:02:17.940092    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:02:17.956141    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:02:17.956154    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:02:17.978831    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:02:17.978843    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:02:17.991948    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:02:17.991960    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:02:18.004988    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:02:18.004999    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:02:18.009674    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:02:18.009681    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:02:18.024181    9036 logs.go:123] Gathering logs for coredns [df0514325839] ...
	I0920 11:02:18.024196    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df0514325839"
	I0920 11:02:20.539464    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:02:25.542298    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:02:25.542924    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:02:25.582186    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:02:25.582350    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:02:25.610315    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:02:25.610435    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:02:25.624340    9036 logs.go:276] 4 containers: [2e582de8a63c df0514325839 010c5980ca22 929cf2692faa]
	I0920 11:02:25.624433    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:02:25.636133    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:02:25.636211    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:02:25.646616    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:02:25.646697    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:02:25.657931    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:02:25.658015    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:02:25.669963    9036 logs.go:276] 0 containers: []
	W0920 11:02:25.669976    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:02:25.670044    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:02:25.681276    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:02:25.681292    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:02:25.681297    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:02:25.698018    9036 logs.go:123] Gathering logs for coredns [df0514325839] ...
	I0920 11:02:25.698028    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df0514325839"
	I0920 11:02:25.710111    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:02:25.710122    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:02:25.726144    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:02:25.726154    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:02:25.737634    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:02:25.737649    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:02:25.742145    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:02:25.742154    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:02:25.777129    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:02:25.777141    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:02:25.789222    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:02:25.789232    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:02:25.800344    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:02:25.800359    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:02:25.834480    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:02:25.834487    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:02:25.848193    9036 logs.go:123] Gathering logs for coredns [2e582de8a63c] ...
	I0920 11:02:25.848204    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e582de8a63c"
	I0920 11:02:25.864043    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:02:25.864052    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:02:25.883997    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:02:25.884012    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:02:25.901582    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:02:25.901591    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:02:25.925982    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:02:25.925990    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:02:28.439491    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:02:33.442424    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:02:33.443034    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:02:33.484542    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:02:33.484697    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:02:33.508890    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:02:33.509026    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:02:33.523925    9036 logs.go:276] 4 containers: [2e582de8a63c df0514325839 010c5980ca22 929cf2692faa]
	I0920 11:02:33.524022    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:02:33.536480    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:02:33.536564    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:02:33.547088    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:02:33.547168    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:02:33.557987    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:02:33.558064    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:02:33.568022    9036 logs.go:276] 0 containers: []
	W0920 11:02:33.568033    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:02:33.568091    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:02:33.582767    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:02:33.582784    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:02:33.582790    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:02:33.587136    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:02:33.587144    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:02:33.603496    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:02:33.603507    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:02:33.615299    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:02:33.615311    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:02:33.627763    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:02:33.627774    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:02:33.662230    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:02:33.662237    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:02:33.676653    9036 logs.go:123] Gathering logs for coredns [df0514325839] ...
	I0920 11:02:33.676663    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df0514325839"
	I0920 11:02:33.688073    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:02:33.688086    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:02:33.699413    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:02:33.699424    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:02:33.722458    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:02:33.722466    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:02:33.756141    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:02:33.756154    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:02:33.780371    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:02:33.780380    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:02:33.799119    9036 logs.go:123] Gathering logs for coredns [2e582de8a63c] ...
	I0920 11:02:33.799129    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e582de8a63c"
	I0920 11:02:33.810713    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:02:33.810725    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:02:33.823005    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:02:33.823018    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:02:36.340068    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:02:41.341927    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:02:41.342376    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:02:41.387540    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:02:41.387712    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:02:41.406446    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:02:41.406567    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:02:41.421806    9036 logs.go:276] 4 containers: [2e582de8a63c df0514325839 010c5980ca22 929cf2692faa]
	I0920 11:02:41.421898    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:02:41.434072    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:02:41.434144    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:02:41.444886    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:02:41.444960    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:02:41.455113    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:02:41.455202    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:02:41.465011    9036 logs.go:276] 0 containers: []
	W0920 11:02:41.465022    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:02:41.465095    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:02:41.476078    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:02:41.476097    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:02:41.476104    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:02:41.513095    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:02:41.513109    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:02:41.526547    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:02:41.526562    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:02:41.540075    9036 logs.go:123] Gathering logs for coredns [2e582de8a63c] ...
	I0920 11:02:41.540086    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e582de8a63c"
	I0920 11:02:41.552545    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:02:41.552557    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:02:41.568200    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:02:41.568211    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:02:41.589670    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:02:41.589683    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:02:41.602669    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:02:41.602682    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:02:41.636290    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:02:41.636299    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:02:41.651907    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:02:41.651920    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:02:41.663600    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:02:41.663614    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:02:41.667804    9036 logs.go:123] Gathering logs for coredns [df0514325839] ...
	I0920 11:02:41.667810    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df0514325839"
	I0920 11:02:41.679273    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:02:41.679284    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:02:41.701731    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:02:41.701738    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:02:41.712851    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:02:41.712858    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:02:44.228885    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:02:49.230327    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:02:49.230904    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 11:02:49.270175    9036 logs.go:276] 1 containers: [031799fcb181]
	I0920 11:02:49.270322    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 11:02:49.291751    9036 logs.go:276] 1 containers: [45134914d5b8]
	I0920 11:02:49.291880    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 11:02:49.307133    9036 logs.go:276] 4 containers: [2e582de8a63c df0514325839 010c5980ca22 929cf2692faa]
	I0920 11:02:49.307229    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 11:02:49.323506    9036 logs.go:276] 1 containers: [8481bad0fa9d]
	I0920 11:02:49.323582    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 11:02:49.333892    9036 logs.go:276] 1 containers: [930f925a5831]
	I0920 11:02:49.333966    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 11:02:49.351424    9036 logs.go:276] 1 containers: [56ce63300b78]
	I0920 11:02:49.351499    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 11:02:49.361635    9036 logs.go:276] 0 containers: []
	W0920 11:02:49.361647    9036 logs.go:278] No container was found matching "kindnet"
	I0920 11:02:49.361703    9036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 11:02:49.371696    9036 logs.go:276] 1 containers: [7f2b8ff38723]
	I0920 11:02:49.371715    9036 logs.go:123] Gathering logs for kubelet ...
	I0920 11:02:49.371720    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 11:02:49.406433    9036 logs.go:123] Gathering logs for coredns [df0514325839] ...
	I0920 11:02:49.406442    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df0514325839"
	I0920 11:02:49.418302    9036 logs.go:123] Gathering logs for kube-scheduler [8481bad0fa9d] ...
	I0920 11:02:49.418315    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8481bad0fa9d"
	I0920 11:02:49.436583    9036 logs.go:123] Gathering logs for storage-provisioner [7f2b8ff38723] ...
	I0920 11:02:49.436596    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2b8ff38723"
	I0920 11:02:49.448725    9036 logs.go:123] Gathering logs for coredns [929cf2692faa] ...
	I0920 11:02:49.448737    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929cf2692faa"
	I0920 11:02:49.460245    9036 logs.go:123] Gathering logs for container status ...
	I0920 11:02:49.460257    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 11:02:49.475638    9036 logs.go:123] Gathering logs for dmesg ...
	I0920 11:02:49.475653    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 11:02:49.479785    9036 logs.go:123] Gathering logs for kube-apiserver [031799fcb181] ...
	I0920 11:02:49.479794    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 031799fcb181"
	I0920 11:02:49.493690    9036 logs.go:123] Gathering logs for etcd [45134914d5b8] ...
	I0920 11:02:49.493701    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45134914d5b8"
	I0920 11:02:49.507313    9036 logs.go:123] Gathering logs for coredns [2e582de8a63c] ...
	I0920 11:02:49.507323    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e582de8a63c"
	I0920 11:02:49.518977    9036 logs.go:123] Gathering logs for coredns [010c5980ca22] ...
	I0920 11:02:49.518990    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010c5980ca22"
	I0920 11:02:49.531179    9036 logs.go:123] Gathering logs for describe nodes ...
	I0920 11:02:49.531192    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 11:02:49.567973    9036 logs.go:123] Gathering logs for kube-proxy [930f925a5831] ...
	I0920 11:02:49.567984    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 930f925a5831"
	I0920 11:02:49.579864    9036 logs.go:123] Gathering logs for kube-controller-manager [56ce63300b78] ...
	I0920 11:02:49.579874    9036 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ce63300b78"
	I0920 11:02:49.598489    9036 logs.go:123] Gathering logs for Docker ...
	I0920 11:02:49.598499    9036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 11:02:52.124496    9036 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 11:02:57.127361    9036 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 11:02:57.140659    9036 out.go:201] 
	W0920 11:02:57.142707    9036 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0920 11:02:57.142737    9036 out.go:270] * 
	W0920 11:02:57.145321    9036 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:02:57.167538    9036 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-423000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.97s)
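The retry rounds above all follow the same shape: probe https://10.0.2.15:8443/healthz with a 5s per-request timeout, gather component logs over SSH, and try again until the 6m0s node wait expires. Below is a minimal sketch of that polling loop for reference; it is not minikube's api_server.go (the real code authenticates against the cluster CA and interleaves the log gathering shown above), and the InsecureSkipVerify shortcut, back-off interval, and function names are illustrative assumptions.

	// waitForHealthz keeps probing an apiserver /healthz endpoint until it
	// answers 200 OK or an overall deadline passes, mirroring the
	// "Checking apiserver healthz ..." / "stopped: ... context deadline
	// exceeded" cycle in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, overall, perRequest time.Duration) error {
		client := &http.Client{
			Timeout: perRequest, // each probe gives up after this long
			Transport: &http.Transport{
				// Illustrative shortcut only; minikube pins the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported healthy
				}
			}
			time.Sleep(2 * time.Second) // assumed back-off before the next probe
		}
		return fmt.Errorf("apiserver healthz never reported healthy: %s", url)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute, 5*time.Second); err != nil {
			fmt.Println(err)
		}
	}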

TestPause/serial/Start (9.91s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-094000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-094000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.847918041s)

-- stdout --
	* [pause-094000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-094000" primary control-plane node in "pause-094000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-094000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-094000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-094000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-094000 -n pause-094000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-094000 -n pause-094000: exit status 7 (65.323583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-094000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.91s)
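Unlike the upgrade failure above, this test and the NoKubernetes tests below never reach Kubernetes at all: qemu2 VM creation fails with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning nothing was listening on the socket_vmnet unix socket on the test host. The probe below reproduces just that connection attempt as a quick host-side check; the socket path is taken from the log, while the timeout and program structure are illustrative assumptions.

	// Attempt the same unix-socket connection the qemu2 driver makes via
	// socket_vmnet. "connection refused" (or "no such file or directory")
	// here points at the socket_vmnet daemon on the host, not at minikube.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet unreachable: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}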

TestNoKubernetes/serial/StartWithK8s (9.98s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-486000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-486000 --driver=qemu2 : exit status 80 (9.90351475s)

-- stdout --
	* [NoKubernetes-486000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-486000" primary control-plane node in "NoKubernetes-486000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-486000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-486000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-486000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-486000 -n NoKubernetes-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-486000 -n NoKubernetes-486000: exit status 7 (71.940833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.98s)

TestNoKubernetes/serial/StartWithStopK8s (5.28s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-486000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-486000 --no-kubernetes --driver=qemu2 : exit status 80 (5.245367959s)

-- stdout --
	* [NoKubernetes-486000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-486000
	* Restarting existing qemu2 VM for "NoKubernetes-486000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-486000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-486000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-486000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-486000 -n NoKubernetes-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-486000 -n NoKubernetes-486000: exit status 7 (34.632416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.28s)

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-486000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-486000 --no-kubernetes --driver=qemu2 : exit status 80 (5.250060208s)

-- stdout --
	* [NoKubernetes-486000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-486000
	* Restarting existing qemu2 VM for "NoKubernetes-486000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-486000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-486000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-486000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-486000 -n NoKubernetes-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-486000 -n NoKubernetes-486000: exit status 7 (62.191084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)
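Each post-mortem above runs the same status probe and then treats exit status 7 with a "Stopped" host as expected ("may be ok"). A sketch of that probe follows for reference; it shells out to the binary and flags shown in the log, but the error handling is an illustrative reconstruction, not helpers_test.go itself (in this report, exit code 7 consistently accompanied a "Stopped" host).

	// hostState reruns the post-mortem status check: print the host state
	// via a go-template and report the command's exit code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func hostState(profile string) (string, int) {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.Output() // stdout is still captured on a non-zero exit
		code := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode()
		} else if err != nil {
			code = -1 // binary missing or not runnable
		}
		return string(out), code
	}

	func main() {
		state, code := hostState("NoKubernetes-486000")
		fmt.Printf("host=%q exit=%d\n", state, code)
		if code != 0 {
			fmt.Println("status error (may be ok): host not running, skipping log retrieval")
		}
	}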

TestNoKubernetes/serial/StartNoArgs (5.36s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-486000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-486000 --driver=qemu2 : exit status 80 (5.292492833s)

-- stdout --
	* [NoKubernetes-486000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-486000
	* Restarting existing qemu2 VM for "NoKubernetes-486000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-486000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-486000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-486000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-486000 -n NoKubernetes-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-486000 -n NoKubernetes-486000: exit status 7 (64.047ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.36s)
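
Because the VM profile is left behind in a half-started state, the cheapest recovery is the one the log itself suggests: delete the profile and retry once the daemon is reachable. A sketch using the profile name and binary from this run:

    # Remove the stale profile, then re-run the failing invocation
    out/minikube-darwin-arm64 delete -p NoKubernetes-486000
    out/minikube-darwin-arm64 start -p NoKubernetes-486000 --driver=qemu2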

TestNetworkPlugins/group/auto/Start (10.12s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.120005916s)

-- stdout --
	* [auto-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-189000" primary control-plane node in "auto-189000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-189000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:01:18.475914    9663 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:01:18.476052    9663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:01:18.476055    9663 out.go:358] Setting ErrFile to fd 2...
	I0920 11:01:18.476058    9663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:01:18.476212    9663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:01:18.477273    9663 out.go:352] Setting JSON to false
	I0920 11:01:18.493648    9663 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5441,"bootTime":1726849837,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:01:18.493727    9663 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:01:18.500572    9663 out.go:177] * [auto-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:01:18.508541    9663 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:01:18.508570    9663 notify.go:220] Checking for updates...
	I0920 11:01:18.516415    9663 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:01:18.519453    9663 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:01:18.522422    9663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:01:18.525410    9663 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:01:18.528425    9663 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:01:18.531794    9663 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:01:18.531860    9663 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 11:01:18.531906    9663 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:01:18.536429    9663 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:01:18.543359    9663 start.go:297] selected driver: qemu2
	I0920 11:01:18.543364    9663 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:01:18.543370    9663 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:01:18.545659    9663 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 11:01:18.549419    9663 out.go:177] * Automatically selected the socket_vmnet network
	I0920 11:01:18.552511    9663 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:01:18.552529    9663 cni.go:84] Creating CNI manager for ""
	I0920 11:01:18.552558    9663 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 11:01:18.552572    9663 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 11:01:18.552596    9663 start.go:340] cluster config:
	{Name:auto-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:01:18.556225    9663 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:01:18.564440    9663 out.go:177] * Starting "auto-189000" primary control-plane node in "auto-189000" cluster
	I0920 11:01:18.568218    9663 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:01:18.568232    9663 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:01:18.568237    9663 cache.go:56] Caching tarball of preloaded images
	I0920 11:01:18.568298    9663 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:01:18.568304    9663 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:01:18.568362    9663 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/auto-189000/config.json ...
	I0920 11:01:18.568374    9663 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/auto-189000/config.json: {Name:mke96a289db2b89f2d98cd928a470bcf040239bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:01:18.568686    9663 start.go:360] acquireMachinesLock for auto-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:01:18.568720    9663 start.go:364] duration metric: took 28.208µs to acquireMachinesLock for "auto-189000"
	I0920 11:01:18.568733    9663 start.go:93] Provisioning new machine with config: &{Name:auto-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:01:18.568758    9663 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:01:18.575421    9663 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:01:18.592070    9663 start.go:159] libmachine.API.Create for "auto-189000" (driver="qemu2")
	I0920 11:01:18.592108    9663 client.go:168] LocalClient.Create starting
	I0920 11:01:18.592181    9663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:01:18.592216    9663 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:18.592224    9663 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:18.592267    9663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:01:18.592290    9663 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:18.592300    9663 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:18.592724    9663 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:01:18.759807    9663 main.go:141] libmachine: Creating SSH key...
	I0920 11:01:18.874024    9663 main.go:141] libmachine: Creating Disk image...
	I0920 11:01:18.874030    9663 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:01:18.874257    9663 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/disk.qcow2
	I0920 11:01:18.883932    9663 main.go:141] libmachine: STDOUT: 
	I0920 11:01:18.883949    9663 main.go:141] libmachine: STDERR: 
	I0920 11:01:18.884003    9663 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/disk.qcow2 +20000M
	I0920 11:01:18.891949    9663 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:01:18.891965    9663 main.go:141] libmachine: STDERR: 
	I0920 11:01:18.891981    9663 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/disk.qcow2
	I0920 11:01:18.891987    9663 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:01:18.892013    9663 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:01:18.892052    9663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:ca:fb:fd:a4:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/disk.qcow2
	I0920 11:01:18.893755    9663 main.go:141] libmachine: STDOUT: 
	I0920 11:01:18.893771    9663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:01:18.893798    9663 client.go:171] duration metric: took 301.683333ms to LocalClient.Create
	I0920 11:01:20.895999    9663 start.go:128] duration metric: took 2.327219666s to createHost
	I0920 11:01:20.896099    9663 start.go:83] releasing machines lock for "auto-189000", held for 2.327380125s
	W0920 11:01:20.896180    9663 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:01:20.905868    9663 out.go:177] * Deleting "auto-189000" in qemu2 ...
	W0920 11:01:20.949132    9663 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:01:20.949172    9663 start.go:729] Will try again in 5 seconds ...
	I0920 11:01:25.951392    9663 start.go:360] acquireMachinesLock for auto-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:01:25.951856    9663 start.go:364] duration metric: took 355.333µs to acquireMachinesLock for "auto-189000"
	I0920 11:01:25.951981    9663 start.go:93] Provisioning new machine with config: &{Name:auto-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:01:25.952182    9663 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:01:25.959745    9663 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:01:25.998887    9663 start.go:159] libmachine.API.Create for "auto-189000" (driver="qemu2")
	I0920 11:01:25.998945    9663 client.go:168] LocalClient.Create starting
	I0920 11:01:25.999052    9663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:01:25.999109    9663 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:25.999130    9663 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:25.999188    9663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:01:25.999230    9663 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:25.999241    9663 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:25.999883    9663 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:01:26.173001    9663 main.go:141] libmachine: Creating SSH key...
	I0920 11:01:26.499099    9663 main.go:141] libmachine: Creating Disk image...
	I0920 11:01:26.499115    9663 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:01:26.499378    9663 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/disk.qcow2
	I0920 11:01:26.509330    9663 main.go:141] libmachine: STDOUT: 
	I0920 11:01:26.509344    9663 main.go:141] libmachine: STDERR: 
	I0920 11:01:26.509410    9663 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/disk.qcow2 +20000M
	I0920 11:01:26.517555    9663 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:01:26.517569    9663 main.go:141] libmachine: STDERR: 
	I0920 11:01:26.517582    9663 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/disk.qcow2
	I0920 11:01:26.517589    9663 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:01:26.517597    9663 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:01:26.517632    9663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:33:92:5b:07:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/auto-189000/disk.qcow2
	I0920 11:01:26.519380    9663 main.go:141] libmachine: STDOUT: 
	I0920 11:01:26.519394    9663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:01:26.519406    9663 client.go:171] duration metric: took 520.458792ms to LocalClient.Create
	I0920 11:01:28.520398    9663 start.go:128] duration metric: took 2.5681925s to createHost
	I0920 11:01:28.520481    9663 start.go:83] releasing machines lock for "auto-189000", held for 2.568620625s
	W0920 11:01:28.520878    9663 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:01:28.530485    9663 out.go:201] 
	W0920 11:01:28.542583    9663 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:01:28.542621    9663 out.go:270] * 
	* 
	W0920 11:01:28.545115    9663 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:01:28.553519    9663 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.12s)
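
On CI agents like this one, socket_vmnet normally runs as a root service, so a refused socket usually means the daemon needs to be (re)started. A hedged sketch; the Homebrew formula name and launchd label below are assumptions that depend on how socket_vmnet was installed on the agent:

    # If installed via Homebrew (assumed formula name):
    sudo brew services restart socket_vmnet
    # If installed from source with the project's launchd plist (assumed label):
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet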

TestNetworkPlugins/group/kindnet/Start (10.05s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.044346s)

-- stdout --
	* [kindnet-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-189000" primary control-plane node in "kindnet-189000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-189000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:01:30.847668    9777 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:01:30.847817    9777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:01:30.847820    9777 out.go:358] Setting ErrFile to fd 2...
	I0920 11:01:30.847823    9777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:01:30.847967    9777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:01:30.849087    9777 out.go:352] Setting JSON to false
	I0920 11:01:30.865443    9777 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5453,"bootTime":1726849837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:01:30.865515    9777 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:01:30.871955    9777 out.go:177] * [kindnet-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:01:30.880698    9777 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:01:30.880750    9777 notify.go:220] Checking for updates...
	I0920 11:01:30.888771    9777 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:01:30.891740    9777 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:01:30.894760    9777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:01:30.897769    9777 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:01:30.900696    9777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:01:30.904130    9777 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:01:30.904197    9777 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 11:01:30.904247    9777 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:01:30.908717    9777 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:01:30.915789    9777 start.go:297] selected driver: qemu2
	I0920 11:01:30.915799    9777 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:01:30.915806    9777 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:01:30.918295    9777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 11:01:30.920962    9777 out.go:177] * Automatically selected the socket_vmnet network
	I0920 11:01:30.923815    9777 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:01:30.923833    9777 cni.go:84] Creating CNI manager for "kindnet"
	I0920 11:01:30.923837    9777 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 11:01:30.923874    9777 start.go:340] cluster config:
	{Name:kindnet-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:01:30.927713    9777 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:01:30.935750    9777 out.go:177] * Starting "kindnet-189000" primary control-plane node in "kindnet-189000" cluster
	I0920 11:01:30.939724    9777 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:01:30.939737    9777 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:01:30.939744    9777 cache.go:56] Caching tarball of preloaded images
	I0920 11:01:30.939798    9777 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:01:30.939804    9777 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:01:30.939866    9777 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/kindnet-189000/config.json ...
	I0920 11:01:30.939877    9777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/kindnet-189000/config.json: {Name:mk50e5a13d07d037de9d734b92a7776e3322c901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:01:30.940102    9777 start.go:360] acquireMachinesLock for kindnet-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:01:30.940137    9777 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "kindnet-189000"
	I0920 11:01:30.940149    9777 start.go:93] Provisioning new machine with config: &{Name:kindnet-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:01:30.940175    9777 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:01:30.948808    9777 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:01:30.965169    9777 start.go:159] libmachine.API.Create for "kindnet-189000" (driver="qemu2")
	I0920 11:01:30.965198    9777 client.go:168] LocalClient.Create starting
	I0920 11:01:30.965261    9777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:01:30.965290    9777 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:30.965300    9777 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:30.965335    9777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:01:30.965358    9777 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:30.965367    9777 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:30.965724    9777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:01:31.132579    9777 main.go:141] libmachine: Creating SSH key...
	I0920 11:01:31.172910    9777 main.go:141] libmachine: Creating Disk image...
	I0920 11:01:31.172920    9777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:01:31.173115    9777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/disk.qcow2
	I0920 11:01:31.182365    9777 main.go:141] libmachine: STDOUT: 
	I0920 11:01:31.182382    9777 main.go:141] libmachine: STDERR: 
	I0920 11:01:31.182445    9777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/disk.qcow2 +20000M
	I0920 11:01:31.190348    9777 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:01:31.190364    9777 main.go:141] libmachine: STDERR: 
	I0920 11:01:31.190379    9777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/disk.qcow2
	I0920 11:01:31.190382    9777 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:01:31.190393    9777 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:01:31.190421    9777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:16:84:c1:bf:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/disk.qcow2
	I0920 11:01:31.192034    9777 main.go:141] libmachine: STDOUT: 
	I0920 11:01:31.192062    9777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:01:31.192085    9777 client.go:171] duration metric: took 226.880167ms to LocalClient.Create
	I0920 11:01:33.194281    9777 start.go:128] duration metric: took 2.2540895s to createHost
	I0920 11:01:33.194383    9777 start.go:83] releasing machines lock for "kindnet-189000", held for 2.254248667s
	W0920 11:01:33.194451    9777 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:01:33.206832    9777 out.go:177] * Deleting "kindnet-189000" in qemu2 ...
	W0920 11:01:33.234683    9777 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:01:33.234709    9777 start.go:729] Will try again in 5 seconds ...
	I0920 11:01:38.236941    9777 start.go:360] acquireMachinesLock for kindnet-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:01:38.237502    9777 start.go:364] duration metric: took 459.625µs to acquireMachinesLock for "kindnet-189000"
	I0920 11:01:38.237642    9777 start.go:93] Provisioning new machine with config: &{Name:kindnet-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:01:38.237896    9777 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:01:38.249542    9777 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:01:38.299518    9777 start.go:159] libmachine.API.Create for "kindnet-189000" (driver="qemu2")
	I0920 11:01:38.299577    9777 client.go:168] LocalClient.Create starting
	I0920 11:01:38.299684    9777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:01:38.299753    9777 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:38.299772    9777 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:38.299860    9777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:01:38.299904    9777 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:38.299915    9777 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:38.300530    9777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:01:38.477579    9777 main.go:141] libmachine: Creating SSH key...
	I0920 11:01:38.791627    9777 main.go:141] libmachine: Creating Disk image...
	I0920 11:01:38.791639    9777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:01:38.791892    9777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/disk.qcow2
	I0920 11:01:38.801733    9777 main.go:141] libmachine: STDOUT: 
	I0920 11:01:38.801753    9777 main.go:141] libmachine: STDERR: 
	I0920 11:01:38.801815    9777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/disk.qcow2 +20000M
	I0920 11:01:38.809885    9777 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:01:38.809902    9777 main.go:141] libmachine: STDERR: 
	I0920 11:01:38.809913    9777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/disk.qcow2
	I0920 11:01:38.809918    9777 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:01:38.809934    9777 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:01:38.809961    9777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:37:b6:36:63:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kindnet-189000/disk.qcow2
	I0920 11:01:38.811755    9777 main.go:141] libmachine: STDOUT: 
	I0920 11:01:38.811771    9777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:01:38.811783    9777 client.go:171] duration metric: took 512.203917ms to LocalClient.Create
	I0920 11:01:40.813972    9777 start.go:128] duration metric: took 2.576047416s to createHost
	I0920 11:01:40.814087    9777 start.go:83] releasing machines lock for "kindnet-189000", held for 2.576572125s
	W0920 11:01:40.814483    9777 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:01:40.833256    9777 out.go:201] 
	W0920 11:01:40.837168    9777 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:01:40.837192    9777 out.go:270] * 
	* 
	W0920 11:01:40.839521    9777 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:01:40.850190    9777 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.05s)
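
The failing handoff can also be reproduced outside the test harness: as the command lines logged above show, libmachine wraps qemu-system-aarch64 in socket_vmnet_client, which connects to the socket and then execs its argument with the vmnet file descriptor inherited. A minimal probe under that same assumption, substituting a trivial command for QEMU:

    # Exits non-zero with "Connection refused" while the daemon is down
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo connected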

TestNetworkPlugins/group/calico/Start (9.98s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.975288209s)

-- stdout --
	* [calico-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-189000" primary control-plane node in "calico-189000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-189000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:01:43.186310    9896 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:01:43.186433    9896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:01:43.186437    9896 out.go:358] Setting ErrFile to fd 2...
	I0920 11:01:43.186439    9896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:01:43.186577    9896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:01:43.187654    9896 out.go:352] Setting JSON to false
	I0920 11:01:43.204001    9896 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5466,"bootTime":1726849837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:01:43.204077    9896 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:01:43.211210    9896 out.go:177] * [calico-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:01:43.219089    9896 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:01:43.219173    9896 notify.go:220] Checking for updates...
	I0920 11:01:43.226195    9896 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:01:43.229192    9896 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:01:43.232163    9896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:01:43.235165    9896 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:01:43.238032    9896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:01:43.241520    9896 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:01:43.241595    9896 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 11:01:43.241642    9896 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:01:43.246184    9896 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:01:43.253202    9896 start.go:297] selected driver: qemu2
	I0920 11:01:43.253210    9896 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:01:43.253218    9896 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:01:43.255428    9896 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 11:01:43.259190    9896 out.go:177] * Automatically selected the socket_vmnet network
	I0920 11:01:43.260746    9896 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:01:43.260780    9896 cni.go:84] Creating CNI manager for "calico"
	I0920 11:01:43.260784    9896 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0920 11:01:43.260822    9896 start.go:340] cluster config:
	{Name:calico-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:01:43.264495    9896 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:01:43.273175    9896 out.go:177] * Starting "calico-189000" primary control-plane node in "calico-189000" cluster
	I0920 11:01:43.277133    9896 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:01:43.277151    9896 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:01:43.277160    9896 cache.go:56] Caching tarball of preloaded images
	I0920 11:01:43.277233    9896 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:01:43.277239    9896 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:01:43.277300    9896 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/calico-189000/config.json ...
	I0920 11:01:43.277310    9896 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/calico-189000/config.json: {Name:mk53c00d2fec6c2f84bdc18ffc9fb3a247649691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:01:43.277530    9896 start.go:360] acquireMachinesLock for calico-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:01:43.277563    9896 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "calico-189000"
	I0920 11:01:43.277575    9896 start.go:93] Provisioning new machine with config: &{Name:calico-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:01:43.277603    9896 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:01:43.286090    9896 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:01:43.303796    9896 start.go:159] libmachine.API.Create for "calico-189000" (driver="qemu2")
	I0920 11:01:43.303831    9896 client.go:168] LocalClient.Create starting
	I0920 11:01:43.303890    9896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:01:43.303923    9896 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:43.303933    9896 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:43.303972    9896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:01:43.303994    9896 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:43.304001    9896 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:43.304348    9896 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:01:43.466717    9896 main.go:141] libmachine: Creating SSH key...
	I0920 11:01:43.511733    9896 main.go:141] libmachine: Creating Disk image...
	I0920 11:01:43.511738    9896 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:01:43.511926    9896 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/disk.qcow2
	I0920 11:01:43.521120    9896 main.go:141] libmachine: STDOUT: 
	I0920 11:01:43.521139    9896 main.go:141] libmachine: STDERR: 
	I0920 11:01:43.521206    9896 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/disk.qcow2 +20000M
	I0920 11:01:43.528932    9896 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:01:43.528947    9896 main.go:141] libmachine: STDERR: 
	I0920 11:01:43.528971    9896 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/disk.qcow2
	I0920 11:01:43.528979    9896 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:01:43.528993    9896 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:01:43.529021    9896 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:12:87:42:2b:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/disk.qcow2
	I0920 11:01:43.530575    9896 main.go:141] libmachine: STDOUT: 
	I0920 11:01:43.530589    9896 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:01:43.530608    9896 client.go:171] duration metric: took 226.772542ms to LocalClient.Create
	I0920 11:01:45.532519    9896 start.go:128] duration metric: took 2.254900167s to createHost
	I0920 11:01:45.532596    9896 start.go:83] releasing machines lock for "calico-189000", held for 2.255037667s
	W0920 11:01:45.532646    9896 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:01:45.546255    9896 out.go:177] * Deleting "calico-189000" in qemu2 ...
	W0920 11:01:45.576518    9896 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:01:45.576542    9896 start.go:729] Will try again in 5 seconds ...
	I0920 11:01:50.578739    9896 start.go:360] acquireMachinesLock for calico-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:01:50.578989    9896 start.go:364] duration metric: took 201.375µs to acquireMachinesLock for "calico-189000"
	I0920 11:01:50.579027    9896 start.go:93] Provisioning new machine with config: &{Name:calico-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:01:50.579142    9896 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:01:50.588443    9896 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:01:50.617766    9896 start.go:159] libmachine.API.Create for "calico-189000" (driver="qemu2")
	I0920 11:01:50.617804    9896 client.go:168] LocalClient.Create starting
	I0920 11:01:50.617893    9896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:01:50.617949    9896 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:50.617963    9896 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:50.618021    9896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:01:50.618056    9896 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:50.618068    9896 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:50.618503    9896 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:01:50.795644    9896 main.go:141] libmachine: Creating SSH key...
	I0920 11:01:51.063023    9896 main.go:141] libmachine: Creating Disk image...
	I0920 11:01:51.063035    9896 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:01:51.063250    9896 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/disk.qcow2
	I0920 11:01:51.072856    9896 main.go:141] libmachine: STDOUT: 
	I0920 11:01:51.072879    9896 main.go:141] libmachine: STDERR: 
	I0920 11:01:51.072948    9896 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/disk.qcow2 +20000M
	I0920 11:01:51.080919    9896 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:01:51.080936    9896 main.go:141] libmachine: STDERR: 
	I0920 11:01:51.080950    9896 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/disk.qcow2
	I0920 11:01:51.080956    9896 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:01:51.080966    9896 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:01:51.081010    9896 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:b9:11:8c:f3:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/calico-189000/disk.qcow2
	I0920 11:01:51.082656    9896 main.go:141] libmachine: STDOUT: 
	I0920 11:01:51.082669    9896 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:01:51.082682    9896 client.go:171] duration metric: took 464.876ms to LocalClient.Create
	I0920 11:01:53.084887    9896 start.go:128] duration metric: took 2.5057195s to createHost
	I0920 11:01:53.084990    9896 start.go:83] releasing machines lock for "calico-189000", held for 2.505998334s
	W0920 11:01:53.085372    9896 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:01:53.100015    9896 out.go:201] 
	W0920 11:01:53.106092    9896 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:01:53.106119    9896 out.go:270] * 
	* 
	W0920 11:01:53.107979    9896 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:01:53.118918    9896 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.98s)
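
The calico run fails identically, which points at the host daemon rather than the CNI under test. Assuming socket_vmnet was installed via Homebrew and registered as a root launchd service (an assumption; the /var/run/socket_vmnet path in these logs could equally come from a source install), restarting it on the agent would be the first thing to try, sketched below:

	# socket_vmnet must run as root to create the vmnet interface
	sudo brew services restart socket_vmnet
	# alternatively, inspect launchd directly for a source-installed daemon
	sudo launchctl list | grep socket_vmnet
	# confirm the socket is back before re-running the suite
	ls -l /var/run/socket_vmnet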

TestNetworkPlugins/group/custom-flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.9189155s)

-- stdout --
	* [custom-flannel-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-189000" primary control-plane node in "custom-flannel-189000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-189000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:01:55.590594   10020 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:01:55.590711   10020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:01:55.590714   10020 out.go:358] Setting ErrFile to fd 2...
	I0920 11:01:55.590716   10020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:01:55.590873   10020 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:01:55.592034   10020 out.go:352] Setting JSON to false
	I0920 11:01:55.609556   10020 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5478,"bootTime":1726849837,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:01:55.609631   10020 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:01:55.617496   10020 out.go:177] * [custom-flannel-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:01:55.625335   10020 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:01:55.625383   10020 notify.go:220] Checking for updates...
	I0920 11:01:55.633304   10020 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:01:55.636377   10020 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:01:55.639338   10020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:01:55.642209   10020 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:01:55.649369   10020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:01:55.653825   10020 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:01:55.653892   10020 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 11:01:55.653939   10020 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:01:55.658281   10020 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:01:55.666277   10020 start.go:297] selected driver: qemu2
	I0920 11:01:55.666284   10020 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:01:55.666290   10020 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:01:55.668461   10020 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 11:01:55.672261   10020 out.go:177] * Automatically selected the socket_vmnet network
	I0920 11:01:55.675285   10020 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:01:55.675302   10020 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0920 11:01:55.675310   10020 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0920 11:01:55.675345   10020 start.go:340] cluster config:
	{Name:custom-flannel-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:01:55.678752   10020 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:01:55.686365   10020 out.go:177] * Starting "custom-flannel-189000" primary control-plane node in "custom-flannel-189000" cluster
	I0920 11:01:55.690264   10020 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:01:55.690279   10020 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:01:55.690287   10020 cache.go:56] Caching tarball of preloaded images
	I0920 11:01:55.690339   10020 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:01:55.690345   10020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:01:55.690396   10020 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/custom-flannel-189000/config.json ...
	I0920 11:01:55.690406   10020 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/custom-flannel-189000/config.json: {Name:mk3b2bd941d33c39a80341333ca4e6b8b5a43863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:01:55.690696   10020 start.go:360] acquireMachinesLock for custom-flannel-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:01:55.690732   10020 start.go:364] duration metric: took 29.917µs to acquireMachinesLock for "custom-flannel-189000"
	I0920 11:01:55.690744   10020 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:01:55.690776   10020 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:01:55.699364   10020 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:01:55.714606   10020 start.go:159] libmachine.API.Create for "custom-flannel-189000" (driver="qemu2")
	I0920 11:01:55.714631   10020 client.go:168] LocalClient.Create starting
	I0920 11:01:55.714699   10020 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:01:55.714746   10020 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:55.714754   10020 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:55.714790   10020 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:01:55.714812   10020 main.go:141] libmachine: Decoding PEM data...
	I0920 11:01:55.714817   10020 main.go:141] libmachine: Parsing certificate...
	I0920 11:01:55.715186   10020 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:01:55.882685   10020 main.go:141] libmachine: Creating SSH key...
	I0920 11:01:56.022637   10020 main.go:141] libmachine: Creating Disk image...
	I0920 11:01:56.022648   10020 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:01:56.022874   10020 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/disk.qcow2
	I0920 11:01:56.033090   10020 main.go:141] libmachine: STDOUT: 
	I0920 11:01:56.033109   10020 main.go:141] libmachine: STDERR: 
	I0920 11:01:56.033189   10020 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/disk.qcow2 +20000M
	I0920 11:01:56.041435   10020 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:01:56.041448   10020 main.go:141] libmachine: STDERR: 
	I0920 11:01:56.041464   10020 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/disk.qcow2
	I0920 11:01:56.041472   10020 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:01:56.041483   10020 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:01:56.041510   10020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:c0:00:c1:29:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/disk.qcow2
	I0920 11:01:56.043228   10020 main.go:141] libmachine: STDOUT: 
	I0920 11:01:56.043243   10020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:01:56.043265   10020 client.go:171] duration metric: took 328.629542ms to LocalClient.Create
	I0920 11:01:58.045418   10020 start.go:128] duration metric: took 2.35463375s to createHost
	I0920 11:01:58.045515   10020 start.go:83] releasing machines lock for "custom-flannel-189000", held for 2.354788s
	W0920 11:01:58.045577   10020 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:01:58.058409   10020 out.go:177] * Deleting "custom-flannel-189000" in qemu2 ...
	W0920 11:01:58.084447   10020 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:01:58.084463   10020 start.go:729] Will try again in 5 seconds ...
	I0920 11:02:03.086702   10020 start.go:360] acquireMachinesLock for custom-flannel-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:02:03.087279   10020 start.go:364] duration metric: took 464.083µs to acquireMachinesLock for "custom-flannel-189000"
	I0920 11:02:03.087353   10020 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:02:03.087694   10020 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:02:03.094406   10020 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:02:03.139475   10020 start.go:159] libmachine.API.Create for "custom-flannel-189000" (driver="qemu2")
	I0920 11:02:03.139522   10020 client.go:168] LocalClient.Create starting
	I0920 11:02:03.139655   10020 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:02:03.139728   10020 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:03.139742   10020 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:03.139800   10020 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:02:03.139840   10020 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:03.139851   10020 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:03.140578   10020 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:02:03.315436   10020 main.go:141] libmachine: Creating SSH key...
	I0920 11:02:03.401077   10020 main.go:141] libmachine: Creating Disk image...
	I0920 11:02:03.401086   10020 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:02:03.401270   10020 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/disk.qcow2
	I0920 11:02:03.410642   10020 main.go:141] libmachine: STDOUT: 
	I0920 11:02:03.410660   10020 main.go:141] libmachine: STDERR: 
	I0920 11:02:03.410728   10020 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/disk.qcow2 +20000M
	I0920 11:02:03.419051   10020 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:02:03.419066   10020 main.go:141] libmachine: STDERR: 
	I0920 11:02:03.419080   10020 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/disk.qcow2
	I0920 11:02:03.419084   10020 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:02:03.419099   10020 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:02:03.419131   10020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:c1:35:f6:8c:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/custom-flannel-189000/disk.qcow2
	I0920 11:02:03.420889   10020 main.go:141] libmachine: STDOUT: 
	I0920 11:02:03.420902   10020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:02:03.420915   10020 client.go:171] duration metric: took 281.388291ms to LocalClient.Create
	I0920 11:02:05.423128   10020 start.go:128] duration metric: took 2.335382709s to createHost
	I0920 11:02:05.423222   10020 start.go:83] releasing machines lock for "custom-flannel-189000", held for 2.335929667s
	W0920 11:02:05.423638   10020 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:05.439498   10020 out.go:201] 
	W0920 11:02:05.444554   10020 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:02:05.444580   10020 out.go:270] * 
	* 
	W0920 11:02:05.446862   10020 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:02:05.465470   10020 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.92s)
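
For a bug report, the advice box above asks for a log bundle; with per-profile clusters like these, the same profile flag used throughout this suite selects the right one (a sketch, using the binary and profile name from this run):

	out/minikube-darwin-arm64 logs --file=logs.txt -p custom-flannel-189000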

TestNetworkPlugins/group/false/Start (9.78s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.780822917s)

-- stdout --
	* [false-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-189000" primary control-plane node in "false-189000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-189000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:02:07.882521   10144 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:02:07.882949   10144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:02:07.882954   10144 out.go:358] Setting ErrFile to fd 2...
	I0920 11:02:07.882956   10144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:02:07.883144   10144 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:02:07.884639   10144 out.go:352] Setting JSON to false
	I0920 11:02:07.901786   10144 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5490,"bootTime":1726849837,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:02:07.901884   10144 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:02:07.909234   10144 out.go:177] * [false-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:02:07.916301   10144 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:02:07.916343   10144 notify.go:220] Checking for updates...
	I0920 11:02:07.927285   10144 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:02:07.930225   10144 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:02:07.933277   10144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:02:07.936286   10144 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:02:07.939303   10144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:02:07.942710   10144 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:02:07.942782   10144 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 11:02:07.942821   10144 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:02:07.947222   10144 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:02:07.954224   10144 start.go:297] selected driver: qemu2
	I0920 11:02:07.954230   10144 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:02:07.954236   10144 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:02:07.956404   10144 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 11:02:07.959286   10144 out.go:177] * Automatically selected the socket_vmnet network
	I0920 11:02:07.960562   10144 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:02:07.960589   10144 cni.go:84] Creating CNI manager for "false"
	I0920 11:02:07.960628   10144 start.go:340] cluster config:
	{Name:false-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:02:07.964073   10144 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:02:07.972255   10144 out.go:177] * Starting "false-189000" primary control-plane node in "false-189000" cluster
	I0920 11:02:07.976242   10144 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:02:07.976255   10144 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:02:07.976263   10144 cache.go:56] Caching tarball of preloaded images
	I0920 11:02:07.976314   10144 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:02:07.976319   10144 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:02:07.976378   10144 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/false-189000/config.json ...
	I0920 11:02:07.976388   10144 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/false-189000/config.json: {Name:mk4084795b29273af16135a5c010427c1de85d23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:02:07.976792   10144 start.go:360] acquireMachinesLock for false-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:02:07.976821   10144 start.go:364] duration metric: took 24.208µs to acquireMachinesLock for "false-189000"
	I0920 11:02:07.976832   10144 start.go:93] Provisioning new machine with config: &{Name:false-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:02:07.976860   10144 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:02:07.981266   10144 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:02:07.996595   10144 start.go:159] libmachine.API.Create for "false-189000" (driver="qemu2")
	I0920 11:02:07.996620   10144 client.go:168] LocalClient.Create starting
	I0920 11:02:07.996686   10144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:02:07.996716   10144 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:07.996726   10144 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:07.996766   10144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:02:07.996788   10144 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:07.996797   10144 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:07.997263   10144 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:02:08.162418   10144 main.go:141] libmachine: Creating SSH key...
	I0920 11:02:08.238563   10144 main.go:141] libmachine: Creating Disk image...
	I0920 11:02:08.238572   10144 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:02:08.238770   10144 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/disk.qcow2
	I0920 11:02:08.247835   10144 main.go:141] libmachine: STDOUT: 
	I0920 11:02:08.247857   10144 main.go:141] libmachine: STDERR: 
	I0920 11:02:08.247905   10144 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/disk.qcow2 +20000M
	I0920 11:02:08.255935   10144 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:02:08.255958   10144 main.go:141] libmachine: STDERR: 
	I0920 11:02:08.255971   10144 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/disk.qcow2
	I0920 11:02:08.255976   10144 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:02:08.255984   10144 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:02:08.256012   10144 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:fe:6a:10:91:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/disk.qcow2
	I0920 11:02:08.257724   10144 main.go:141] libmachine: STDOUT: 
	I0920 11:02:08.257737   10144 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:02:08.257773   10144 client.go:171] duration metric: took 261.149125ms to LocalClient.Create
	I0920 11:02:10.259838   10144 start.go:128] duration metric: took 2.282983709s to createHost
	I0920 11:02:10.259861   10144 start.go:83] releasing machines lock for "false-189000", held for 2.283045166s
	W0920 11:02:10.259879   10144 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:10.269738   10144 out.go:177] * Deleting "false-189000" in qemu2 ...
	W0920 11:02:10.292873   10144 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:10.292882   10144 start.go:729] Will try again in 5 seconds ...
	I0920 11:02:15.295150   10144 start.go:360] acquireMachinesLock for false-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:02:15.295794   10144 start.go:364] duration metric: took 517.167µs to acquireMachinesLock for "false-189000"
	I0920 11:02:15.295918   10144 start.go:93] Provisioning new machine with config: &{Name:false-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:02:15.296282   10144 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:02:15.302032   10144 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:02:15.353240   10144 start.go:159] libmachine.API.Create for "false-189000" (driver="qemu2")
	I0920 11:02:15.353289   10144 client.go:168] LocalClient.Create starting
	I0920 11:02:15.353412   10144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:02:15.353482   10144 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:15.353500   10144 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:15.353565   10144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:02:15.353609   10144 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:15.353621   10144 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:15.354211   10144 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:02:15.530883   10144 main.go:141] libmachine: Creating SSH key...
	I0920 11:02:15.570533   10144 main.go:141] libmachine: Creating Disk image...
	I0920 11:02:15.570539   10144 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:02:15.570739   10144 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/disk.qcow2
	I0920 11:02:15.579806   10144 main.go:141] libmachine: STDOUT: 
	I0920 11:02:15.579826   10144 main.go:141] libmachine: STDERR: 
	I0920 11:02:15.579891   10144 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/disk.qcow2 +20000M
	I0920 11:02:15.587987   10144 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:02:15.588002   10144 main.go:141] libmachine: STDERR: 
	I0920 11:02:15.588011   10144 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/disk.qcow2
	I0920 11:02:15.588015   10144 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:02:15.588025   10144 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:02:15.588048   10144 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:15:c3:37:95:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/false-189000/disk.qcow2
	I0920 11:02:15.589768   10144 main.go:141] libmachine: STDOUT: 
	I0920 11:02:15.589782   10144 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:02:15.589795   10144 client.go:171] duration metric: took 236.501333ms to LocalClient.Create
	I0920 11:02:17.591978   10144 start.go:128] duration metric: took 2.295666709s to createHost
	I0920 11:02:17.592070   10144 start.go:83] releasing machines lock for "false-189000", held for 2.296236s
	W0920 11:02:17.592428   10144 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:17.601974   10144 out.go:201] 
	W0920 11:02:17.610183   10144 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:02:17.610215   10144 out.go:270] * 
	* 
	W0920 11:02:17.611777   10144 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:02:17.622076   10144 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.78s)
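Note: this start, like every other failure in this group, aborts because socket_vmnet_client cannot reach the daemon socket ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so QEMU is never launched. A minimal smoke test on the build host, reusing only the paths that appear in the log above (the trailing `true` is an arbitrary stand-in payload command, not part of this report):

	ls -l /var/run/socket_vmnet                                            # the daemon socket must exist and be accessible
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true   # exits non-zero with the same "Connection refused" while the daemon is down

If both checks fail, the socket_vmnet daemon is simply not running on this agent, which would account for every GUEST_PROVISION failure in the sections that follow.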

TestNetworkPlugins/group/enable-default-cni/Start (9.9s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.897312042s)

-- stdout --
	* [enable-default-cni-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-189000" primary control-plane node in "enable-default-cni-189000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-189000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:02:19.822601   10257 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:02:19.822719   10257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:02:19.822722   10257 out.go:358] Setting ErrFile to fd 2...
	I0920 11:02:19.822724   10257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:02:19.822861   10257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:02:19.823963   10257 out.go:352] Setting JSON to false
	I0920 11:02:19.840776   10257 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5502,"bootTime":1726849837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:02:19.840861   10257 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:02:19.848283   10257 out.go:177] * [enable-default-cni-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:02:19.857906   10257 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:02:19.857952   10257 notify.go:220] Checking for updates...
	I0920 11:02:19.865026   10257 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:02:19.866586   10257 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:02:19.870060   10257 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:02:19.873038   10257 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:02:19.876117   10257 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:02:19.880458   10257 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:02:19.880527   10257 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 11:02:19.880577   10257 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:02:19.885059   10257 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:02:19.889987   10257 start.go:297] selected driver: qemu2
	I0920 11:02:19.889994   10257 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:02:19.890000   10257 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:02:19.892503   10257 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 11:02:19.895084   10257 out.go:177] * Automatically selected the socket_vmnet network
	E0920 11:02:19.898104   10257 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0920 11:02:19.898117   10257 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:02:19.898137   10257 cni.go:84] Creating CNI manager for "bridge"
	I0920 11:02:19.898141   10257 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 11:02:19.898165   10257 start.go:340] cluster config:
	{Name:enable-default-cni-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:02:19.901933   10257 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:02:19.909998   10257 out.go:177] * Starting "enable-default-cni-189000" primary control-plane node in "enable-default-cni-189000" cluster
	I0920 11:02:19.913951   10257 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:02:19.913968   10257 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:02:19.913979   10257 cache.go:56] Caching tarball of preloaded images
	I0920 11:02:19.914029   10257 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:02:19.914034   10257 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:02:19.914083   10257 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/enable-default-cni-189000/config.json ...
	I0920 11:02:19.914093   10257 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/enable-default-cni-189000/config.json: {Name:mk963855fa1389b8f5b391cf8fa7bf44fd036163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:02:19.914503   10257 start.go:360] acquireMachinesLock for enable-default-cni-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:02:19.914542   10257 start.go:364] duration metric: took 31.375µs to acquireMachinesLock for "enable-default-cni-189000"
	I0920 11:02:19.914556   10257 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:02:19.914592   10257 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:02:19.919084   10257 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:02:19.934455   10257 start.go:159] libmachine.API.Create for "enable-default-cni-189000" (driver="qemu2")
	I0920 11:02:19.934480   10257 client.go:168] LocalClient.Create starting
	I0920 11:02:19.934538   10257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:02:19.934570   10257 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:19.934578   10257 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:19.934616   10257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:02:19.934642   10257 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:19.934652   10257 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:19.935158   10257 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:02:20.099284   10257 main.go:141] libmachine: Creating SSH key...
	I0920 11:02:20.222020   10257 main.go:141] libmachine: Creating Disk image...
	I0920 11:02:20.222032   10257 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:02:20.222248   10257 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/disk.qcow2
	I0920 11:02:20.231810   10257 main.go:141] libmachine: STDOUT: 
	I0920 11:02:20.231829   10257 main.go:141] libmachine: STDERR: 
	I0920 11:02:20.231885   10257 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/disk.qcow2 +20000M
	I0920 11:02:20.240075   10257 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:02:20.240097   10257 main.go:141] libmachine: STDERR: 
	I0920 11:02:20.240110   10257 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/disk.qcow2
	I0920 11:02:20.240117   10257 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:02:20.240129   10257 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:02:20.240153   10257 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:1d:1a:cd:fd:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/disk.qcow2
	I0920 11:02:20.241948   10257 main.go:141] libmachine: STDOUT: 
	I0920 11:02:20.241965   10257 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:02:20.241985   10257 client.go:171] duration metric: took 307.501458ms to LocalClient.Create
	I0920 11:02:22.243856   10257 start.go:128] duration metric: took 2.329255458s to createHost
	I0920 11:02:22.243909   10257 start.go:83] releasing machines lock for "enable-default-cni-189000", held for 2.329372292s
	W0920 11:02:22.243945   10257 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:22.254986   10257 out.go:177] * Deleting "enable-default-cni-189000" in qemu2 ...
	W0920 11:02:22.280658   10257 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:22.280678   10257 start.go:729] Will try again in 5 seconds ...
	I0920 11:02:27.282893   10257 start.go:360] acquireMachinesLock for enable-default-cni-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:02:27.283412   10257 start.go:364] duration metric: took 430.667µs to acquireMachinesLock for "enable-default-cni-189000"
	I0920 11:02:27.283559   10257 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:02:27.283844   10257 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:02:27.296354   10257 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:02:27.349759   10257 start.go:159] libmachine.API.Create for "enable-default-cni-189000" (driver="qemu2")
	I0920 11:02:27.349810   10257 client.go:168] LocalClient.Create starting
	I0920 11:02:27.349941   10257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:02:27.350005   10257 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:27.350021   10257 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:27.350094   10257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:02:27.350143   10257 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:27.350163   10257 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:27.350741   10257 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:02:27.526137   10257 main.go:141] libmachine: Creating SSH key...
	I0920 11:02:27.614810   10257 main.go:141] libmachine: Creating Disk image...
	I0920 11:02:27.614818   10257 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:02:27.615033   10257 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/disk.qcow2
	I0920 11:02:27.624461   10257 main.go:141] libmachine: STDOUT: 
	I0920 11:02:27.624482   10257 main.go:141] libmachine: STDERR: 
	I0920 11:02:27.624543   10257 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/disk.qcow2 +20000M
	I0920 11:02:27.632572   10257 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:02:27.632589   10257 main.go:141] libmachine: STDERR: 
	I0920 11:02:27.632601   10257 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/disk.qcow2
	I0920 11:02:27.632606   10257 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:02:27.632615   10257 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:02:27.632642   10257 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:c2:36:10:36:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/enable-default-cni-189000/disk.qcow2
	I0920 11:02:27.634379   10257 main.go:141] libmachine: STDOUT: 
	I0920 11:02:27.634392   10257 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:02:27.634404   10257 client.go:171] duration metric: took 284.589917ms to LocalClient.Create
	I0920 11:02:29.636650   10257 start.go:128] duration metric: took 2.352780792s to createHost
	I0920 11:02:29.636724   10257 start.go:83] releasing machines lock for "enable-default-cni-189000", held for 2.353299375s
	W0920 11:02:29.637036   10257 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:29.655716   10257 out.go:201] 
	W0920 11:02:29.658683   10257 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:02:29.658705   10257 out.go:270] * 
	* 
	W0920 11:02:29.661238   10257 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:02:29.678681   10257 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.90s)
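Note: independent of the socket_vmnet failure, this run also logs `Found deprecated --enable-default-cni flag, setting --cni=bridge` (start_flags.go:464), i.e. the test's `--enable-default-cni=true` is translated to the bridge CNI before the cluster config is built. A hypothetical equivalent invocation without the deprecated flag, using the same profile and flags as the command above:

	out/minikube-darwin-arm64 start -p enable-default-cni-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2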

TestNetworkPlugins/group/flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.920318917s)

-- stdout --
	* [flannel-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-189000" primary control-plane node in "flannel-189000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-189000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:02:31.836384   10372 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:02:31.836495   10372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:02:31.836499   10372 out.go:358] Setting ErrFile to fd 2...
	I0920 11:02:31.836502   10372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:02:31.836634   10372 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:02:31.837738   10372 out.go:352] Setting JSON to false
	I0920 11:02:31.854309   10372 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5514,"bootTime":1726849837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:02:31.854383   10372 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:02:31.861556   10372 out.go:177] * [flannel-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:02:31.869543   10372 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:02:31.869582   10372 notify.go:220] Checking for updates...
	I0920 11:02:31.877425   10372 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:02:31.880483   10372 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:02:31.883498   10372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:02:31.886418   10372 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:02:31.889515   10372 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:02:31.892828   10372 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:02:31.892894   10372 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 11:02:31.892949   10372 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:02:31.897448   10372 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:02:31.904519   10372 start.go:297] selected driver: qemu2
	I0920 11:02:31.904525   10372 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:02:31.904531   10372 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:02:31.906798   10372 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 11:02:31.910429   10372 out.go:177] * Automatically selected the socket_vmnet network
	I0920 11:02:31.913569   10372 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:02:31.913587   10372 cni.go:84] Creating CNI manager for "flannel"
	I0920 11:02:31.913591   10372 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0920 11:02:31.913616   10372 start.go:340] cluster config:
	{Name:flannel-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:02:31.917168   10372 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:02:31.925416   10372 out.go:177] * Starting "flannel-189000" primary control-plane node in "flannel-189000" cluster
	I0920 11:02:31.929507   10372 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:02:31.929524   10372 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:02:31.929537   10372 cache.go:56] Caching tarball of preloaded images
	I0920 11:02:31.929609   10372 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:02:31.929615   10372 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:02:31.929699   10372 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/flannel-189000/config.json ...
	I0920 11:02:31.929712   10372 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/flannel-189000/config.json: {Name:mk18f9280300fa08688027139d9504a374fd29b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:02:31.929926   10372 start.go:360] acquireMachinesLock for flannel-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:02:31.929959   10372 start.go:364] duration metric: took 27µs to acquireMachinesLock for "flannel-189000"
	I0920 11:02:31.929972   10372 start.go:93] Provisioning new machine with config: &{Name:flannel-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:flannel-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:02:31.929996   10372 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:02:31.938573   10372 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:02:31.955446   10372 start.go:159] libmachine.API.Create for "flannel-189000" (driver="qemu2")
	I0920 11:02:31.955484   10372 client.go:168] LocalClient.Create starting
	I0920 11:02:31.955555   10372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:02:31.955587   10372 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:31.955595   10372 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:31.955633   10372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:02:31.955656   10372 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:31.955666   10372 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:31.956036   10372 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:02:32.124260   10372 main.go:141] libmachine: Creating SSH key...
	I0920 11:02:32.237157   10372 main.go:141] libmachine: Creating Disk image...
	I0920 11:02:32.237164   10372 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:02:32.237355   10372 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/disk.qcow2
	I0920 11:02:32.246838   10372 main.go:141] libmachine: STDOUT: 
	I0920 11:02:32.246850   10372 main.go:141] libmachine: STDERR: 
	I0920 11:02:32.246912   10372 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/disk.qcow2 +20000M
	I0920 11:02:32.254801   10372 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:02:32.254820   10372 main.go:141] libmachine: STDERR: 
	I0920 11:02:32.254836   10372 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/disk.qcow2
	I0920 11:02:32.254841   10372 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:02:32.254854   10372 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:02:32.254892   10372 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:e6:88:38:09:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/disk.qcow2
	I0920 11:02:32.256551   10372 main.go:141] libmachine: STDOUT: 
	I0920 11:02:32.256563   10372 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:02:32.256582   10372 client.go:171] duration metric: took 301.093542ms to LocalClient.Create
	I0920 11:02:34.258757   10372 start.go:128] duration metric: took 2.328747875s to createHost
	I0920 11:02:34.258844   10372 start.go:83] releasing machines lock for "flannel-189000", held for 2.328889542s
	W0920 11:02:34.258916   10372 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:34.275799   10372 out.go:177] * Deleting "flannel-189000" in qemu2 ...
	W0920 11:02:34.308231   10372 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:34.308256   10372 start.go:729] Will try again in 5 seconds ...
	I0920 11:02:39.310479   10372 start.go:360] acquireMachinesLock for flannel-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:02:39.311023   10372 start.go:364] duration metric: took 444.333µs to acquireMachinesLock for "flannel-189000"
	I0920 11:02:39.311189   10372 start.go:93] Provisioning new machine with config: &{Name:flannel-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:flannel-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:02:39.311552   10372 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:02:39.324220   10372 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:02:39.377920   10372 start.go:159] libmachine.API.Create for "flannel-189000" (driver="qemu2")
	I0920 11:02:39.377976   10372 client.go:168] LocalClient.Create starting
	I0920 11:02:39.378108   10372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:02:39.378180   10372 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:39.378194   10372 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:39.378271   10372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:02:39.378318   10372 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:39.378336   10372 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:39.378896   10372 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:02:39.554610   10372 main.go:141] libmachine: Creating SSH key...
	I0920 11:02:39.678068   10372 main.go:141] libmachine: Creating Disk image...
	I0920 11:02:39.678076   10372 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:02:39.678288   10372 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/disk.qcow2
	I0920 11:02:39.687559   10372 main.go:141] libmachine: STDOUT: 
	I0920 11:02:39.687578   10372 main.go:141] libmachine: STDERR: 
	I0920 11:02:39.687647   10372 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/disk.qcow2 +20000M
	I0920 11:02:39.695679   10372 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:02:39.695694   10372 main.go:141] libmachine: STDERR: 
	I0920 11:02:39.695707   10372 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/disk.qcow2
	I0920 11:02:39.695712   10372 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:02:39.695719   10372 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:02:39.695764   10372 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:99:d6:7e:97:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/flannel-189000/disk.qcow2
	I0920 11:02:39.697428   10372 main.go:141] libmachine: STDOUT: 
	I0920 11:02:39.697441   10372 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:02:39.697453   10372 client.go:171] duration metric: took 319.474791ms to LocalClient.Create
	I0920 11:02:41.699508   10372 start.go:128] duration metric: took 2.387953917s to createHost
	I0920 11:02:41.699526   10372 start.go:83] releasing machines lock for "flannel-189000", held for 2.388496334s
	W0920 11:02:41.699594   10372 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:41.706854   10372 out.go:201] 
	W0920 11:02:41.709756   10372 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:02:41.709761   10372 out.go:270] * 
	* 
	W0920 11:02:41.710278   10372 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:02:41.719797   10372 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.92s)
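All of the CNI start failures in this group share the root cause visible in the stderr above: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM is torn down before it ever boots. A minimal sanity check on the build host, using the paths taken from the log (standard macOS tools; a diagnostic sketch, not part of the test run):

	# Is the daemon's unix socket present, and is a daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Probe the socket directly; a "Connection refused" here reproduces
	# exactly the error that aborts each VM creation in these tests.
	nc -U /var/run/socket_vmnet < /dev/null

If the socket file exists but no daemon is listening, a stale socket was left behind by a dead daemon, which would produce a refused connection on every attempt, as seen here.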

TestNetworkPlugins/group/bridge/Start (9.84s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.839447334s)

-- stdout --
	* [bridge-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-189000" primary control-plane node in "bridge-189000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-189000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:02:44.027950   10496 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:02:44.028085   10496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:02:44.028088   10496 out.go:358] Setting ErrFile to fd 2...
	I0920 11:02:44.028091   10496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:02:44.028238   10496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:02:44.029338   10496 out.go:352] Setting JSON to false
	I0920 11:02:44.045560   10496 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5527,"bootTime":1726849837,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:02:44.045621   10496 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:02:44.053679   10496 out.go:177] * [bridge-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:02:44.063422   10496 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:02:44.063456   10496 notify.go:220] Checking for updates...
	I0920 11:02:44.071423   10496 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:02:44.074397   10496 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:02:44.077387   10496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:02:44.080437   10496 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:02:44.083419   10496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:02:44.086748   10496 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:02:44.086810   10496 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 11:02:44.086853   10496 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:02:44.091456   10496 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:02:44.098415   10496 start.go:297] selected driver: qemu2
	I0920 11:02:44.098423   10496 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:02:44.098430   10496 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:02:44.100579   10496 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 11:02:44.103482   10496 out.go:177] * Automatically selected the socket_vmnet network
	I0920 11:02:44.106473   10496 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:02:44.106490   10496 cni.go:84] Creating CNI manager for "bridge"
	I0920 11:02:44.106499   10496 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 11:02:44.106550   10496 start.go:340] cluster config:
	{Name:bridge-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:02:44.109947   10496 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:02:44.118483   10496 out.go:177] * Starting "bridge-189000" primary control-plane node in "bridge-189000" cluster
	I0920 11:02:44.122343   10496 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:02:44.122355   10496 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:02:44.122362   10496 cache.go:56] Caching tarball of preloaded images
	I0920 11:02:44.122414   10496 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:02:44.122419   10496 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:02:44.122473   10496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/bridge-189000/config.json ...
	I0920 11:02:44.122486   10496 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/bridge-189000/config.json: {Name:mk203c5b6f6a299e8f4bb9a3de093cbb45ad0c73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:02:44.122779   10496 start.go:360] acquireMachinesLock for bridge-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:02:44.122809   10496 start.go:364] duration metric: took 24.792µs to acquireMachinesLock for "bridge-189000"
	I0920 11:02:44.122820   10496 start.go:93] Provisioning new machine with config: &{Name:bridge-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:bridge-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:02:44.122846   10496 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:02:44.130362   10496 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:02:44.146143   10496 start.go:159] libmachine.API.Create for "bridge-189000" (driver="qemu2")
	I0920 11:02:44.146175   10496 client.go:168] LocalClient.Create starting
	I0920 11:02:44.146244   10496 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:02:44.146273   10496 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:44.146282   10496 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:44.146317   10496 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:02:44.146342   10496 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:44.146352   10496 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:44.146696   10496 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:02:44.311707   10496 main.go:141] libmachine: Creating SSH key...
	I0920 11:02:44.418832   10496 main.go:141] libmachine: Creating Disk image...
	I0920 11:02:44.418838   10496 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:02:44.419043   10496 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/disk.qcow2
	I0920 11:02:44.428960   10496 main.go:141] libmachine: STDOUT: 
	I0920 11:02:44.428993   10496 main.go:141] libmachine: STDERR: 
	I0920 11:02:44.429065   10496 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/disk.qcow2 +20000M
	I0920 11:02:44.437419   10496 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:02:44.437433   10496 main.go:141] libmachine: STDERR: 
	I0920 11:02:44.437454   10496 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/disk.qcow2
	I0920 11:02:44.437461   10496 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:02:44.437472   10496 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:02:44.437499   10496 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:18:0b:74:1a:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/disk.qcow2
	I0920 11:02:44.439198   10496 main.go:141] libmachine: STDOUT: 
	I0920 11:02:44.439214   10496 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:02:44.439235   10496 client.go:171] duration metric: took 293.054583ms to LocalClient.Create
	I0920 11:02:46.441332   10496 start.go:128] duration metric: took 2.318486875s to createHost
	I0920 11:02:46.441412   10496 start.go:83] releasing machines lock for "bridge-189000", held for 2.318610583s
	W0920 11:02:46.441449   10496 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:46.447055   10496 out.go:177] * Deleting "bridge-189000" in qemu2 ...
	W0920 11:02:46.470068   10496 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:46.470079   10496 start.go:729] Will try again in 5 seconds ...
	I0920 11:02:51.472293   10496 start.go:360] acquireMachinesLock for bridge-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:02:51.472695   10496 start.go:364] duration metric: took 308.916µs to acquireMachinesLock for "bridge-189000"
	I0920 11:02:51.472795   10496 start.go:93] Provisioning new machine with config: &{Name:bridge-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:bridge-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:02:51.473086   10496 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:02:51.483730   10496 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:02:51.526460   10496 start.go:159] libmachine.API.Create for "bridge-189000" (driver="qemu2")
	I0920 11:02:51.526544   10496 client.go:168] LocalClient.Create starting
	I0920 11:02:51.526651   10496 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:02:51.526715   10496 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:51.526731   10496 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:51.526803   10496 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:02:51.526842   10496 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:51.526857   10496 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:51.527630   10496 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:02:51.698409   10496 main.go:141] libmachine: Creating SSH key...
	I0920 11:02:51.766835   10496 main.go:141] libmachine: Creating Disk image...
	I0920 11:02:51.766845   10496 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:02:51.767027   10496 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/disk.qcow2
	I0920 11:02:51.776532   10496 main.go:141] libmachine: STDOUT: 
	I0920 11:02:51.776553   10496 main.go:141] libmachine: STDERR: 
	I0920 11:02:51.776616   10496 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/disk.qcow2 +20000M
	I0920 11:02:51.784699   10496 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:02:51.784723   10496 main.go:141] libmachine: STDERR: 
	I0920 11:02:51.784743   10496 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/disk.qcow2
	I0920 11:02:51.784749   10496 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:02:51.784757   10496 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:02:51.784782   10496 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:f7:74:c1:05:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/bridge-189000/disk.qcow2
	I0920 11:02:51.786532   10496 main.go:141] libmachine: STDOUT: 
	I0920 11:02:51.786547   10496 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:02:51.786561   10496 client.go:171] duration metric: took 260.012709ms to LocalClient.Create
	I0920 11:02:53.788731   10496 start.go:128] duration metric: took 2.315626333s to createHost
	I0920 11:02:53.788805   10496 start.go:83] releasing machines lock for "bridge-189000", held for 2.316098916s
	W0920 11:02:53.789109   10496 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:53.803936   10496 out.go:201] 
	W0920 11:02:53.808006   10496 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:02:53.808034   10496 out.go:270] * 
	* 
	W0920 11:02:53.810432   10496 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:02:53.823788   10496 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.84s)
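Note that the disk-image pipeline itself succeeds on every attempt: both qemu-img calls return with empty STDERR, and the error only appears once socket_vmnet_client tries to hand QEMU a connected network fd. The two image steps can be reproduced in isolation with the same qemu-img subcommands the log shows (file names here are illustrative):

	# Convert the raw seed image to qcow2, then grow it by 20000 MB,
	# mirroring the "Creating 20000 MB hard disk image" step above.
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M
	qemu-img info disk.qcow2    # confirm the new virtual size

That both steps pass consistently points at the host networking layer rather than QEMU or the cached ISO.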

TestNetworkPlugins/group/kubenet/Start (10.05s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-189000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.046122708s)

-- stdout --
	* [kubenet-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-189000" primary control-plane node in "kubenet-189000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-189000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:02:56.056976   10614 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:02:56.057122   10614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:02:56.057127   10614 out.go:358] Setting ErrFile to fd 2...
	I0920 11:02:56.057130   10614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:02:56.057272   10614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:02:56.058407   10614 out.go:352] Setting JSON to false
	I0920 11:02:56.074993   10614 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5539,"bootTime":1726849837,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:02:56.075060   10614 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:02:56.082322   10614 out.go:177] * [kubenet-189000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:02:56.091286   10614 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:02:56.091395   10614 notify.go:220] Checking for updates...
	I0920 11:02:56.098322   10614 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:02:56.101325   10614 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:02:56.104281   10614 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:02:56.107332   10614 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:02:56.110342   10614 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:02:56.113559   10614 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:02:56.113624   10614 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 11:02:56.113682   10614 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:02:56.118294   10614 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:02:56.125394   10614 start.go:297] selected driver: qemu2
	I0920 11:02:56.125401   10614 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:02:56.125407   10614 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:02:56.127516   10614 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 11:02:56.130349   10614 out.go:177] * Automatically selected the socket_vmnet network
	I0920 11:02:56.131937   10614 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:02:56.131961   10614 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0920 11:02:56.132000   10614 start.go:340] cluster config:
	{Name:kubenet-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:02:56.135318   10614 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:02:56.143316   10614 out.go:177] * Starting "kubenet-189000" primary control-plane node in "kubenet-189000" cluster
	I0920 11:02:56.147254   10614 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:02:56.147267   10614 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:02:56.147274   10614 cache.go:56] Caching tarball of preloaded images
	I0920 11:02:56.147319   10614 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:02:56.147324   10614 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:02:56.147375   10614 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/kubenet-189000/config.json ...
	I0920 11:02:56.147385   10614 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/kubenet-189000/config.json: {Name:mk958190843e07c71a2456ae8cdba1129a58c406 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:02:56.147589   10614 start.go:360] acquireMachinesLock for kubenet-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:02:56.147619   10614 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "kubenet-189000"
	I0920 11:02:56.147631   10614 start.go:93] Provisioning new machine with config: &{Name:kubenet-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kubenet-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:02:56.147664   10614 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:02:56.156321   10614 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:02:56.171661   10614 start.go:159] libmachine.API.Create for "kubenet-189000" (driver="qemu2")
	I0920 11:02:56.171692   10614 client.go:168] LocalClient.Create starting
	I0920 11:02:56.171757   10614 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:02:56.171791   10614 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:56.171799   10614 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:56.171837   10614 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:02:56.171861   10614 main.go:141] libmachine: Decoding PEM data...
	I0920 11:02:56.171874   10614 main.go:141] libmachine: Parsing certificate...
	I0920 11:02:56.172220   10614 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:02:56.338123   10614 main.go:141] libmachine: Creating SSH key...
	I0920 11:02:56.449486   10614 main.go:141] libmachine: Creating Disk image...
	I0920 11:02:56.449498   10614 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:02:56.449717   10614 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/disk.qcow2
	I0920 11:02:56.459353   10614 main.go:141] libmachine: STDOUT: 
	I0920 11:02:56.459375   10614 main.go:141] libmachine: STDERR: 
	I0920 11:02:56.459444   10614 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/disk.qcow2 +20000M
	I0920 11:02:56.467542   10614 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:02:56.467561   10614 main.go:141] libmachine: STDERR: 
	I0920 11:02:56.467584   10614 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/disk.qcow2
	I0920 11:02:56.467598   10614 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:02:56.467610   10614 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:02:56.467635   10614 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9c:58:16:81:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/disk.qcow2
	I0920 11:02:56.469251   10614 main.go:141] libmachine: STDOUT: 
	I0920 11:02:56.469272   10614 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:02:56.469294   10614 client.go:171] duration metric: took 297.597291ms to LocalClient.Create
	I0920 11:02:58.471468   10614 start.go:128] duration metric: took 2.323790208s to createHost
	I0920 11:02:58.471540   10614 start.go:83] releasing machines lock for "kubenet-189000", held for 2.323925292s
	W0920 11:02:58.471603   10614 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:58.482382   10614 out.go:177] * Deleting "kubenet-189000" in qemu2 ...
	W0920 11:02:58.515302   10614 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:02:58.515327   10614 start.go:729] Will try again in 5 seconds ...
	I0920 11:03:03.517575   10614 start.go:360] acquireMachinesLock for kubenet-189000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:03.518073   10614 start.go:364] duration metric: took 400.917µs to acquireMachinesLock for "kubenet-189000"
	I0920 11:03:03.518143   10614 start.go:93] Provisioning new machine with config: &{Name:kubenet-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:03:03.518371   10614 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:03:03.547925   10614 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 11:03:03.598117   10614 start.go:159] libmachine.API.Create for "kubenet-189000" (driver="qemu2")
	I0920 11:03:03.598187   10614 client.go:168] LocalClient.Create starting
	I0920 11:03:03.598320   10614 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:03:03.598385   10614 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:03.598402   10614 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:03.598458   10614 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:03:03.598504   10614 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:03.598518   10614 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:03.599210   10614 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:03:03.774863   10614 main.go:141] libmachine: Creating SSH key...
	I0920 11:03:04.000461   10614 main.go:141] libmachine: Creating Disk image...
	I0920 11:03:04.000471   10614 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:03:04.000658   10614 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/disk.qcow2
	I0920 11:03:04.009874   10614 main.go:141] libmachine: STDOUT: 
	I0920 11:03:04.009889   10614 main.go:141] libmachine: STDERR: 
	I0920 11:03:04.009954   10614 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/disk.qcow2 +20000M
	I0920 11:03:04.018077   10614 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:03:04.018092   10614 main.go:141] libmachine: STDERR: 
	I0920 11:03:04.018107   10614 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/disk.qcow2
	I0920 11:03:04.018112   10614 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:03:04.018122   10614 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:04.018171   10614 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:ef:bd:d5:d2:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/kubenet-189000/disk.qcow2
	I0920 11:03:04.019770   10614 main.go:141] libmachine: STDOUT: 
	I0920 11:03:04.019784   10614 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:04.019797   10614 client.go:171] duration metric: took 421.607ms to LocalClient.Create
	I0920 11:03:06.021876   10614 start.go:128] duration metric: took 2.503475542s to createHost
	I0920 11:03:06.021903   10614 start.go:83] releasing machines lock for "kubenet-189000", held for 2.503823459s
	W0920 11:03:06.022077   10614 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:06.042531   10614 out.go:201] 
	W0920 11:03:06.046516   10614 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:03:06.046531   10614 out.go:270] * 
	* 
	W0920 11:03:06.047398   10614 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:03:06.065559   10614 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.05s)
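All of the socket_vmnet failures in this run follow the pattern above: nothing was accepting connections on /var/run/socket_vmnet, so every QEMU launch through socket_vmnet_client aborted with "Connection refused" before the VM could boot. A minimal host-side triage sketch, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (the service commands below are assumptions about this agent's setup, not taken from the log):

	# Check that the unix socket the driver expects actually exists
	ls -l /var/run/socket_vmnet
	# Restart the daemon if it is down (assumes a Homebrew-managed service)
	sudo brew services restart socket_vmnet

With the socket back, the create path shown above (qemu-img convert/resize, then the socket_vmnet_client invocation) should get past LocalClient.Create.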

TestStartStop/group/old-k8s-version/serial/FirstStart (10s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-048000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-048000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.928226917s)

-- stdout --
	* [old-k8s-version-048000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-048000" primary control-plane node in "old-k8s-version-048000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-048000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:03:08.242049   10733 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:03:08.242176   10733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:08.242179   10733 out.go:358] Setting ErrFile to fd 2...
	I0920 11:03:08.242182   10733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:08.242309   10733 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:03:08.243379   10733 out.go:352] Setting JSON to false
	I0920 11:03:08.259590   10733 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5551,"bootTime":1726849837,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:03:08.259665   10733 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:03:08.266182   10733 out.go:177] * [old-k8s-version-048000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:03:08.276037   10733 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:03:08.276101   10733 notify.go:220] Checking for updates...
	I0920 11:03:08.281974   10733 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:03:08.285014   10733 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:03:08.288075   10733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:03:08.289695   10733 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:03:08.293023   10733 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:03:08.296386   10733 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:03:08.296460   10733 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 11:03:08.296503   10733 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:03:08.300885   10733 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:03:08.308024   10733 start.go:297] selected driver: qemu2
	I0920 11:03:08.308030   10733 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:03:08.308036   10733 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:03:08.310360   10733 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 11:03:08.313040   10733 out.go:177] * Automatically selected the socket_vmnet network
	I0920 11:03:08.316116   10733 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:03:08.316132   10733 cni.go:84] Creating CNI manager for ""
	I0920 11:03:08.316151   10733 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 11:03:08.316189   10733 start.go:340] cluster config:
	{Name:old-k8s-version-048000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:03:08.319671   10733 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:08.324118   10733 out.go:177] * Starting "old-k8s-version-048000" primary control-plane node in "old-k8s-version-048000" cluster
	I0920 11:03:08.327996   10733 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 11:03:08.328009   10733 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 11:03:08.328015   10733 cache.go:56] Caching tarball of preloaded images
	I0920 11:03:08.328075   10733 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:03:08.328081   10733 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 11:03:08.328133   10733 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/old-k8s-version-048000/config.json ...
	I0920 11:03:08.328144   10733 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/old-k8s-version-048000/config.json: {Name:mk659aba978bfb90d8b07f5d3af3dd3ff71edd5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:03:08.328349   10733 start.go:360] acquireMachinesLock for old-k8s-version-048000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:08.328382   10733 start.go:364] duration metric: took 24.292µs to acquireMachinesLock for "old-k8s-version-048000"
	I0920 11:03:08.328394   10733 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-048000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:03:08.328419   10733 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:03:08.337033   10733 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 11:03:08.352180   10733 start.go:159] libmachine.API.Create for "old-k8s-version-048000" (driver="qemu2")
	I0920 11:03:08.352203   10733 client.go:168] LocalClient.Create starting
	I0920 11:03:08.352275   10733 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:03:08.352306   10733 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:08.352315   10733 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:08.352346   10733 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:03:08.352368   10733 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:08.352375   10733 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:08.352752   10733 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:03:08.519232   10733 main.go:141] libmachine: Creating SSH key...
	I0920 11:03:08.618372   10733 main.go:141] libmachine: Creating Disk image...
	I0920 11:03:08.618384   10733 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:03:08.618598   10733 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2
	I0920 11:03:08.628376   10733 main.go:141] libmachine: STDOUT: 
	I0920 11:03:08.628400   10733 main.go:141] libmachine: STDERR: 
	I0920 11:03:08.628479   10733 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2 +20000M
	I0920 11:03:08.636820   10733 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:03:08.636836   10733 main.go:141] libmachine: STDERR: 
	I0920 11:03:08.636850   10733 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2
	I0920 11:03:08.636857   10733 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:03:08.636868   10733 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:08.636907   10733 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:b6:f3:11:ab:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2
	I0920 11:03:08.638536   10733 main.go:141] libmachine: STDOUT: 
	I0920 11:03:08.638550   10733 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:08.638569   10733 client.go:171] duration metric: took 286.36275ms to LocalClient.Create
	I0920 11:03:10.640690   10733 start.go:128] duration metric: took 2.31226625s to createHost
	I0920 11:03:10.640715   10733 start.go:83] releasing machines lock for "old-k8s-version-048000", held for 2.312340208s
	W0920 11:03:10.640742   10733 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:10.658692   10733 out.go:177] * Deleting "old-k8s-version-048000" in qemu2 ...
	W0920 11:03:10.680526   10733 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:10.680536   10733 start.go:729] Will try again in 5 seconds ...
	I0920 11:03:15.682650   10733 start.go:360] acquireMachinesLock for old-k8s-version-048000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:15.682868   10733 start.go:364] duration metric: took 163.125µs to acquireMachinesLock for "old-k8s-version-048000"
	I0920 11:03:15.682898   10733 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-048000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:03:15.683001   10733 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:03:15.691302   10733 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 11:03:15.715095   10733 start.go:159] libmachine.API.Create for "old-k8s-version-048000" (driver="qemu2")
	I0920 11:03:15.715130   10733 client.go:168] LocalClient.Create starting
	I0920 11:03:15.715211   10733 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:03:15.715253   10733 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:15.715264   10733 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:15.715305   10733 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:03:15.715335   10733 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:15.715342   10733 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:15.715798   10733 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:03:15.882669   10733 main.go:141] libmachine: Creating SSH key...
	I0920 11:03:16.071624   10733 main.go:141] libmachine: Creating Disk image...
	I0920 11:03:16.071636   10733 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:03:16.071878   10733 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2
	I0920 11:03:16.081658   10733 main.go:141] libmachine: STDOUT: 
	I0920 11:03:16.081673   10733 main.go:141] libmachine: STDERR: 
	I0920 11:03:16.081734   10733 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2 +20000M
	I0920 11:03:16.089904   10733 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:03:16.089921   10733 main.go:141] libmachine: STDERR: 
	I0920 11:03:16.089936   10733 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2
	I0920 11:03:16.089942   10733 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:03:16.089950   10733 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:16.089987   10733 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:ce:ec:6a:06:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2
	I0920 11:03:16.091654   10733 main.go:141] libmachine: STDOUT: 
	I0920 11:03:16.091667   10733 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:16.091681   10733 client.go:171] duration metric: took 376.548209ms to LocalClient.Create
	I0920 11:03:18.093859   10733 start.go:128] duration metric: took 2.410844542s to createHost
	I0920 11:03:18.093940   10733 start.go:83] releasing machines lock for "old-k8s-version-048000", held for 2.411071917s
	W0920 11:03:18.094364   10733 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-048000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-048000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:18.105201   10733 out.go:201] 
	W0920 11:03:18.108192   10733 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:03:18.108222   10733 out.go:270] * 
	* 
	W0920 11:03:18.110707   10733 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:03:18.127090   10733 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-048000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000: exit status 7 (65.964292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-048000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.00s)
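Because FirstStart never brought the cluster up, no "old-k8s-version-048000" context was written to the kubeconfig. The remaining serial tests in this group (DeployApp, EnableAddonWhileActive, SecondStart, UserAppExistsAfterStop) therefore fail on the missing context rather than on independent bugs. An illustrative check, not part of the test run, that would confirm the absence:

	kubectl config get-contexts old-k8s-version-048000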

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-048000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-048000 create -f testdata/busybox.yaml: exit status 1 (32.315583ms)

** stderr ** 
	error: context "old-k8s-version-048000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-048000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000: exit status 7 (30.881291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-048000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000: exit status 7 (29.405541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-048000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-048000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-048000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-048000 describe deploy/metrics-server -n kube-system: exit status 1 (27.762666ms)

** stderr ** 
	error: context "old-k8s-version-048000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-048000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000: exit status 7 (29.776166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-048000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
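For reference, the enable command above exercises minikube's addon image overrides: --images maps an addon component (here MetricsServer) to an alternate image and --registries points it at an alternate registry, which is why the assertion looks for "fake.domain/registry.k8s.io/echoserver:1.4" in the deployment. On a healthy cluster the check would reduce to something like this (illustrative, not part of the test run):

	kubectl --context old-k8s-version-048000 describe deploy/metrics-server -n kube-system | grep fake.domain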

TestStartStop/group/old-k8s-version/serial/SecondStart (5.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-048000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-048000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.181719584s)

-- stdout --
	* [old-k8s-version-048000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-048000" primary control-plane node in "old-k8s-version-048000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-048000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-048000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:03:20.655682   10783 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:03:20.655814   10783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:20.655817   10783 out.go:358] Setting ErrFile to fd 2...
	I0920 11:03:20.655820   10783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:20.655997   10783 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:03:20.657083   10783 out.go:352] Setting JSON to false
	I0920 11:03:20.673307   10783 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5563,"bootTime":1726849837,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:03:20.673370   10783 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:03:20.678001   10783 out.go:177] * [old-k8s-version-048000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:03:20.684948   10783 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:03:20.684994   10783 notify.go:220] Checking for updates...
	I0920 11:03:20.692009   10783 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:03:20.694962   10783 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:03:20.697946   10783 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:03:20.700955   10783 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:03:20.703928   10783 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:03:20.707224   10783 config.go:182] Loaded profile config "old-k8s-version-048000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0920 11:03:20.710931   10783 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 11:03:20.714926   10783 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:03:20.718959   10783 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 11:03:20.724943   10783 start.go:297] selected driver: qemu2
	I0920 11:03:20.724948   10783 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-048000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:03:20.724992   10783 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:03:20.727213   10783 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:03:20.727240   10783 cni.go:84] Creating CNI manager for ""
	I0920 11:03:20.727269   10783 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 11:03:20.727286   10783 start.go:340] cluster config:
	{Name:old-k8s-version-048000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:03:20.730629   10783 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:20.738932   10783 out.go:177] * Starting "old-k8s-version-048000" primary control-plane node in "old-k8s-version-048000" cluster
	I0920 11:03:20.742975   10783 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 11:03:20.742995   10783 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 11:03:20.743001   10783 cache.go:56] Caching tarball of preloaded images
	I0920 11:03:20.743062   10783 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:03:20.743068   10783 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 11:03:20.743143   10783 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/old-k8s-version-048000/config.json ...
	I0920 11:03:20.743648   10783 start.go:360] acquireMachinesLock for old-k8s-version-048000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:20.743676   10783 start.go:364] duration metric: took 21.708µs to acquireMachinesLock for "old-k8s-version-048000"
	I0920 11:03:20.743685   10783 start.go:96] Skipping create...Using existing machine configuration
	I0920 11:03:20.743690   10783 fix.go:54] fixHost starting: 
	I0920 11:03:20.743799   10783 fix.go:112] recreateIfNeeded on old-k8s-version-048000: state=Stopped err=<nil>
	W0920 11:03:20.743807   10783 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 11:03:20.746888   10783 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-048000" ...
	I0920 11:03:20.755022   10783 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:20.755057   10783 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:ce:ec:6a:06:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2
	I0920 11:03:20.756987   10783 main.go:141] libmachine: STDOUT: 
	I0920 11:03:20.757006   10783 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:20.757039   10783 fix.go:56] duration metric: took 13.348375ms for fixHost
	I0920 11:03:20.757044   10783 start.go:83] releasing machines lock for "old-k8s-version-048000", held for 13.364084ms
	W0920 11:03:20.757050   10783 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:03:20.757079   10783 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:20.757083   10783 start.go:729] Will try again in 5 seconds ...
	I0920 11:03:25.759207   10783 start.go:360] acquireMachinesLock for old-k8s-version-048000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:25.759428   10783 start.go:364] duration metric: took 171.958µs to acquireMachinesLock for "old-k8s-version-048000"
	I0920 11:03:25.759466   10783 start.go:96] Skipping create...Using existing machine configuration
	I0920 11:03:25.759475   10783 fix.go:54] fixHost starting: 
	I0920 11:03:25.759759   10783 fix.go:112] recreateIfNeeded on old-k8s-version-048000: state=Stopped err=<nil>
	W0920 11:03:25.759769   10783 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 11:03:25.770019   10783 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-048000" ...
	I0920 11:03:25.773874   10783 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:25.773961   10783 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:ce:ec:6a:06:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/old-k8s-version-048000/disk.qcow2
	I0920 11:03:25.777608   10783 main.go:141] libmachine: STDOUT: 
	I0920 11:03:25.777633   10783 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:25.777666   10783 fix.go:56] duration metric: took 18.192583ms for fixHost
	I0920 11:03:25.777673   10783 start.go:83] releasing machines lock for "old-k8s-version-048000", held for 18.2365ms
	W0920 11:03:25.777745   10783 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-048000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-048000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:25.786065   10783 out.go:201] 
	W0920 11:03:25.790009   10783 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:03:25.790017   10783 out.go:270] * 
	* 
	W0920 11:03:25.790853   10783 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:03:25.801027   10783 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-048000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000: exit status 7 (40.524041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-048000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.22s)
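
Every start attempt in this group dies at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet, so qemu never receives its network file descriptor and the driver gives up after one retry. A minimal diagnostic sketch for the CI host follows; the launchd label is an assumption based on socket_vmnet's documented launchd install target and may differ by install method.

# Verify the socket_vmnet daemon is up before rerunning the suite:
ls -l /var/run/socket_vmnet                    # the socket file should exist
sudo launchctl list | grep socket_vmnet        # the daemon should be loaded
# If it is down, restart it (label assumed; adjust to your install):
sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet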

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-048000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000: exit status 7 (29.608125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-048000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-048000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-048000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-048000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.089833ms)

** stderr ** 
	error: context "old-k8s-version-048000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-048000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000: exit status 7 (29.917375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-048000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
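
Both post-stop checks above fail before they ever reach the cluster: the second start exited with status 80, so the "old-k8s-version-048000" kubeconfig context was never recreated. A quick confirmation sketch (the KUBECONFIG path is the one reported earlier in this run's start logs):

KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig kubectl config get-contexts
# The context is absent from the list, so every "kubectl --context
# old-k8s-version-048000" invocation exits 1 with the error shown above.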

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-048000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000: exit status 7 (29.474792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-048000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
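
The want/got diff above is entirely one-sided because "image list" had no running VM to query: all eight expected k8s.gcr.io v1.20.0 images are reported missing. Reproducing by hand with the same binary and profile (a sketch, not part of the recorded run):

out/minikube-darwin-arm64 -p old-k8s-version-048000 image list --format=json
# On a healthy profile this prints a JSON array containing the eight images
# from the diff; here the list comes back empty because the host is Stopped.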

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-048000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-048000 --alsologtostderr -v=1: exit status 83 (41.711ms)

-- stdout --
	* The control-plane node old-k8s-version-048000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-048000"

-- /stdout --
** stderr ** 
	I0920 11:03:26.038316   10804 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:03:26.039193   10804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:26.039196   10804 out.go:358] Setting ErrFile to fd 2...
	I0920 11:03:26.039199   10804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:26.039341   10804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:03:26.039534   10804 out.go:352] Setting JSON to false
	I0920 11:03:26.039544   10804 mustload.go:65] Loading cluster: old-k8s-version-048000
	I0920 11:03:26.039757   10804 config.go:182] Loaded profile config "old-k8s-version-048000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0920 11:03:26.043638   10804 out.go:177] * The control-plane node old-k8s-version-048000 host is not running: state=Stopped
	I0920 11:03:26.047608   10804 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-048000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-048000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000: exit status 7 (30.379542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-048000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000: exit status 7 (30.215583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-048000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
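
Exit status 83 is the "host not running" advice exit, consistent with the Stopped state in both post-mortems. The recovery path the CLI itself prints (a sketch only; neither step can succeed until /var/run/socket_vmnet accepts connections again):

out/minikube-darwin-arm64 delete -p old-k8s-version-048000
out/minikube-darwin-arm64 start -p old-k8s-version-048000 --driver=qemu2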

TestStartStop/group/no-preload/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-081000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-081000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.784160292s)

-- stdout --
	* [no-preload-081000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-081000" primary control-plane node in "no-preload-081000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-081000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:03:26.359067   10821 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:03:26.359203   10821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:26.359206   10821 out.go:358] Setting ErrFile to fd 2...
	I0920 11:03:26.359208   10821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:26.359329   10821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:03:26.360432   10821 out.go:352] Setting JSON to false
	I0920 11:03:26.377016   10821 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5569,"bootTime":1726849837,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:03:26.377076   10821 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:03:26.382092   10821 out.go:177] * [no-preload-081000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:03:26.389130   10821 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:03:26.389191   10821 notify.go:220] Checking for updates...
	I0920 11:03:26.397042   10821 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:03:26.400068   10821 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:03:26.403115   10821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:03:26.406070   10821 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:03:26.409119   10821 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:03:26.412513   10821 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:03:26.412582   10821 config.go:182] Loaded profile config "stopped-upgrade-423000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 11:03:26.412639   10821 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:03:26.416083   10821 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:03:26.423065   10821 start.go:297] selected driver: qemu2
	I0920 11:03:26.423072   10821 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:03:26.423079   10821 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:03:26.425512   10821 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 11:03:26.426885   10821 out.go:177] * Automatically selected the socket_vmnet network
	I0920 11:03:26.430194   10821 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:03:26.430225   10821 cni.go:84] Creating CNI manager for ""
	I0920 11:03:26.430254   10821 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 11:03:26.430267   10821 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 11:03:26.430293   10821 start.go:340] cluster config:
	{Name:no-preload-081000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-081000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:03:26.433946   10821 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:26.442006   10821 out.go:177] * Starting "no-preload-081000" primary control-plane node in "no-preload-081000" cluster
	I0920 11:03:26.446092   10821 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:03:26.446155   10821 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/no-preload-081000/config.json ...
	I0920 11:03:26.446169   10821 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/no-preload-081000/config.json: {Name:mkd16425dfbc0279f2e68481eb2d680cd4b59b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:03:26.446187   10821 cache.go:107] acquiring lock: {Name:mkc831d2b996411ad9b2ce79b491563b42f25287 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:26.446196   10821 cache.go:107] acquiring lock: {Name:mk2a1761717faa717fa3a2eb4dc9244386285caa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:26.446243   10821 cache.go:115] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0920 11:03:26.446248   10821 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 62.583µs
	I0920 11:03:26.446258   10821 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0920 11:03:26.446264   10821 cache.go:107] acquiring lock: {Name:mk67d80b59e9f7697a0e364977845f6173d61e38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:26.446348   10821 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 11:03:26.446332   10821 cache.go:107] acquiring lock: {Name:mkaff6d5273ff634f449ee5b6007cf26ed1b60f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:26.446369   10821 cache.go:107] acquiring lock: {Name:mkbb6420ffdb6693e507980985a45aa63f0801bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:26.446348   10821 cache.go:107] acquiring lock: {Name:mkb343b6b900343df8126f25514fcfee71a7f7e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:26.446386   10821 cache.go:107] acquiring lock: {Name:mk2cd8767a5cdcfb3326e9f314f63b5f5cf06a4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:26.446359   10821 cache.go:107] acquiring lock: {Name:mkbd28d3a4a2b79dca4b4c353f77c5c25904170b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:26.446354   10821 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 11:03:26.446553   10821 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 11:03:26.446605   10821 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 11:03:26.446656   10821 start.go:360] acquireMachinesLock for no-preload-081000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:26.446696   10821 start.go:364] duration metric: took 36.375µs to acquireMachinesLock for "no-preload-081000"
	I0920 11:03:26.446722   10821 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 11:03:26.446708   10821 start.go:93] Provisioning new machine with config: &{Name:no-preload-081000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-081000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:03:26.446735   10821 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:03:26.446836   10821 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 11:03:26.446851   10821 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 11:03:26.451068   10821 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 11:03:26.458876   10821 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 11:03:26.458888   10821 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 11:03:26.459013   10821 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 11:03:26.460387   10821 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 11:03:26.460431   10821 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 11:03:26.461516   10821 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 11:03:26.461589   10821 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 11:03:26.468091   10821 start.go:159] libmachine.API.Create for "no-preload-081000" (driver="qemu2")
	I0920 11:03:26.468110   10821 client.go:168] LocalClient.Create starting
	I0920 11:03:26.468184   10821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:03:26.468215   10821 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:26.468225   10821 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:26.468265   10821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:03:26.468290   10821 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:26.468300   10821 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:26.468653   10821 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:03:26.638904   10821 main.go:141] libmachine: Creating SSH key...
	I0920 11:03:26.668693   10821 main.go:141] libmachine: Creating Disk image...
	I0920 11:03:26.668713   10821 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:03:26.668909   10821 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2
	I0920 11:03:26.678341   10821 main.go:141] libmachine: STDOUT: 
	I0920 11:03:26.678362   10821 main.go:141] libmachine: STDERR: 
	I0920 11:03:26.678416   10821 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2 +20000M
	I0920 11:03:26.687745   10821 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:03:26.687770   10821 main.go:141] libmachine: STDERR: 
	I0920 11:03:26.687793   10821 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2
	I0920 11:03:26.687797   10821 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:03:26.687812   10821 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:26.687838   10821 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:f3:43:dd:74:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2
	I0920 11:03:26.689715   10821 main.go:141] libmachine: STDOUT: 
	I0920 11:03:26.689757   10821 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:26.689790   10821 client.go:171] duration metric: took 221.675417ms to LocalClient.Create
	I0920 11:03:26.852626   10821 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 11:03:26.856740   10821 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0920 11:03:26.866777   10821 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 11:03:26.896895   10821 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0920 11:03:26.907077   10821 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 11:03:26.915279   10821 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 11:03:26.973398   10821 cache.go:162] opening:  /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 11:03:27.050893   10821 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0920 11:03:27.050913   10821 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 604.599666ms
	I0920 11:03:27.050928   10821 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0920 11:03:28.689991   10821 start.go:128] duration metric: took 2.243238459s to createHost
	I0920 11:03:28.690062   10821 start.go:83] releasing machines lock for "no-preload-081000", held for 2.243366666s
	W0920 11:03:28.690114   10821 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:28.704486   10821 out.go:177] * Deleting "no-preload-081000" in qemu2 ...
	W0920 11:03:28.737354   10821 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:28.737378   10821 start.go:729] Will try again in 5 seconds ...
	I0920 11:03:29.664999   10821 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0920 11:03:29.665036   10821 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.218689708s
	I0920 11:03:29.665055   10821 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0920 11:03:29.724593   10821 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0920 11:03:29.724634   10821 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.278278083s
	I0920 11:03:29.724685   10821 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0920 11:03:30.553822   10821 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0920 11:03:30.553835   10821 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.107669125s
	I0920 11:03:30.553842   10821 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0920 11:03:30.836994   10821 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0920 11:03:30.837035   10821 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.390720583s
	I0920 11:03:30.837061   10821 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0920 11:03:31.357692   10821 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0920 11:03:31.357747   10821 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.911375375s
	I0920 11:03:31.357779   10821 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0920 11:03:33.737668   10821 start.go:360] acquireMachinesLock for no-preload-081000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:33.738094   10821 start.go:364] duration metric: took 344.542µs to acquireMachinesLock for "no-preload-081000"
	I0920 11:03:33.738235   10821 start.go:93] Provisioning new machine with config: &{Name:no-preload-081000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-081000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:03:33.738491   10821 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:03:33.745041   10821 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 11:03:33.797029   10821 start.go:159] libmachine.API.Create for "no-preload-081000" (driver="qemu2")
	I0920 11:03:33.797097   10821 client.go:168] LocalClient.Create starting
	I0920 11:03:33.797215   10821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:03:33.797282   10821 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:33.797301   10821 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:33.797372   10821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:03:33.797416   10821 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:33.797436   10821 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:33.797994   10821 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:03:33.972573   10821 main.go:141] libmachine: Creating SSH key...
	I0920 11:03:34.044723   10821 main.go:141] libmachine: Creating Disk image...
	I0920 11:03:34.044729   10821 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:03:34.044916   10821 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2
	I0920 11:03:34.054174   10821 main.go:141] libmachine: STDOUT: 
	I0920 11:03:34.054202   10821 main.go:141] libmachine: STDERR: 
	I0920 11:03:34.054267   10821 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2 +20000M
	I0920 11:03:34.062359   10821 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:03:34.062375   10821 main.go:141] libmachine: STDERR: 
	I0920 11:03:34.062389   10821 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2
	I0920 11:03:34.062393   10821 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:03:34.062403   10821 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:34.062440   10821 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:e8:46:18:83:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2
	I0920 11:03:34.064216   10821 main.go:141] libmachine: STDOUT: 
	I0920 11:03:34.064236   10821 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:34.064248   10821 client.go:171] duration metric: took 267.147ms to LocalClient.Create
	I0920 11:03:35.928400   10821 cache.go:157] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0920 11:03:35.928467   10821 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 9.482246709s
	I0920 11:03:35.928517   10821 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0920 11:03:35.928563   10821 cache.go:87] Successfully saved all images to host disk.
	I0920 11:03:36.066457   10821 start.go:128] duration metric: took 2.327940708s to createHost
	I0920 11:03:36.066512   10821 start.go:83] releasing machines lock for "no-preload-081000", held for 2.328406209s
	W0920 11:03:36.066890   10821 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-081000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-081000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:36.077347   10821 out.go:201] 
	W0920 11:03:36.086553   10821 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:03:36.086581   10821 out.go:270] * 
	* 
	W0920 11:03:36.089483   10821 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:03:36.100308   10821 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-081000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000: exit status 7 (64.494ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-081000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.85s)
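
Note that every image-cache write in this run succeeded while both VM creations failed, which isolates the fault to the vmnet socket rather than disk or registry access. One way to probe the same path the driver uses (a sketch; "true" is just an illustrative payload command, not the test's invocation):

/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
# "Failed to connect ... Connection refused" from this probe confirms the
# daemon side is down, independent of qemu and of minikube's retry logic.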

TestStartStop/group/embed-certs/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-228000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-228000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.851656292s)

-- stdout --
	* [embed-certs-228000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-228000" primary control-plane node in "embed-certs-228000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-228000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:03:30.205066   10864 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:03:30.205182   10864 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:30.205185   10864 out.go:358] Setting ErrFile to fd 2...
	I0920 11:03:30.205187   10864 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:30.205310   10864 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:03:30.206392   10864 out.go:352] Setting JSON to false
	I0920 11:03:30.222741   10864 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5573,"bootTime":1726849837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:03:30.222809   10864 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:03:30.228092   10864 out.go:177] * [embed-certs-228000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:03:30.236080   10864 notify.go:220] Checking for updates...
	I0920 11:03:30.241063   10864 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:03:30.247989   10864 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:03:30.255128   10864 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:03:30.263067   10864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:03:30.270057   10864 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:03:30.281033   10864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:03:30.285343   10864 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:03:30.285421   10864 config.go:182] Loaded profile config "no-preload-081000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:03:30.285480   10864 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:03:30.290088   10864 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:03:30.296016   10864 start.go:297] selected driver: qemu2
	I0920 11:03:30.296021   10864 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:03:30.296026   10864 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:03:30.298337   10864 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 11:03:30.302073   10864 out.go:177] * Automatically selected the socket_vmnet network
	I0920 11:03:30.306106   10864 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:03:30.306124   10864 cni.go:84] Creating CNI manager for ""
	I0920 11:03:30.306146   10864 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 11:03:30.306152   10864 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 11:03:30.306188   10864 start.go:340] cluster config:
	{Name:embed-certs-228000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-228000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:03:30.309900   10864 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:30.314571   10864 out.go:177] * Starting "embed-certs-228000" primary control-plane node in "embed-certs-228000" cluster
	I0920 11:03:30.318091   10864 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:03:30.318111   10864 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:03:30.318120   10864 cache.go:56] Caching tarball of preloaded images
	I0920 11:03:30.318188   10864 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:03:30.318195   10864 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:03:30.318263   10864 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/embed-certs-228000/config.json ...
	I0920 11:03:30.318275   10864 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/embed-certs-228000/config.json: {Name:mk899c0e5fd638778c5d67930f63b467a9640e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:03:30.318579   10864 start.go:360] acquireMachinesLock for embed-certs-228000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:30.318613   10864 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "embed-certs-228000"
	I0920 11:03:30.318625   10864 start.go:93] Provisioning new machine with config: &{Name:embed-certs-228000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-228000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:03:30.318653   10864 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:03:30.322063   10864 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 11:03:30.339531   10864 start.go:159] libmachine.API.Create for "embed-certs-228000" (driver="qemu2")
	I0920 11:03:30.339558   10864 client.go:168] LocalClient.Create starting
	I0920 11:03:30.339625   10864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:03:30.339656   10864 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:30.339665   10864 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:30.339705   10864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:03:30.339728   10864 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:30.339737   10864 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:30.340070   10864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:03:30.505529   10864 main.go:141] libmachine: Creating SSH key...
	I0920 11:03:30.536301   10864 main.go:141] libmachine: Creating Disk image...
	I0920 11:03:30.536308   10864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:03:30.536505   10864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2
	I0920 11:03:30.545806   10864 main.go:141] libmachine: STDOUT: 
	I0920 11:03:30.545829   10864 main.go:141] libmachine: STDERR: 
	I0920 11:03:30.545894   10864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2 +20000M
	I0920 11:03:30.557550   10864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:03:30.557566   10864 main.go:141] libmachine: STDERR: 
	I0920 11:03:30.557584   10864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2
	I0920 11:03:30.557590   10864 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:03:30.557603   10864 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:30.557633   10864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:d0:bb:ca:07:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2
	I0920 11:03:30.559350   10864 main.go:141] libmachine: STDOUT: 
	I0920 11:03:30.559364   10864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:30.559380   10864 client.go:171] duration metric: took 219.817042ms to LocalClient.Create
	I0920 11:03:32.561544   10864 start.go:128] duration metric: took 2.242878584s to createHost
	I0920 11:03:32.561614   10864 start.go:83] releasing machines lock for "embed-certs-228000", held for 2.243002625s
	W0920 11:03:32.561709   10864 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:32.574092   10864 out.go:177] * Deleting "embed-certs-228000" in qemu2 ...
	W0920 11:03:32.615979   10864 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:32.616017   10864 start.go:729] Will try again in 5 seconds ...
	I0920 11:03:37.618237   10864 start.go:360] acquireMachinesLock for embed-certs-228000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:37.618763   10864 start.go:364] duration metric: took 407.375µs to acquireMachinesLock for "embed-certs-228000"
	I0920 11:03:37.618929   10864 start.go:93] Provisioning new machine with config: &{Name:embed-certs-228000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-228000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:03:37.619273   10864 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:03:37.629044   10864 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 11:03:37.680958   10864 start.go:159] libmachine.API.Create for "embed-certs-228000" (driver="qemu2")
	I0920 11:03:37.681030   10864 client.go:168] LocalClient.Create starting
	I0920 11:03:37.681132   10864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:03:37.681184   10864 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:37.681202   10864 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:37.681280   10864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:03:37.681312   10864 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:37.681325   10864 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:37.681877   10864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:03:37.906519   10864 main.go:141] libmachine: Creating SSH key...
	I0920 11:03:37.966377   10864 main.go:141] libmachine: Creating Disk image...
	I0920 11:03:37.966385   10864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:03:37.966582   10864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2
	I0920 11:03:37.975880   10864 main.go:141] libmachine: STDOUT: 
	I0920 11:03:37.975909   10864 main.go:141] libmachine: STDERR: 
	I0920 11:03:37.975961   10864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2 +20000M
	I0920 11:03:37.983714   10864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:03:37.983745   10864 main.go:141] libmachine: STDERR: 
	I0920 11:03:37.983758   10864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2
	I0920 11:03:37.983763   10864 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:03:37.983773   10864 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:37.983840   10864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9b:cc:b1:fd:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2
	I0920 11:03:37.985495   10864 main.go:141] libmachine: STDOUT: 
	I0920 11:03:37.985512   10864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:37.985526   10864 client.go:171] duration metric: took 304.492792ms to LocalClient.Create
	I0920 11:03:39.987680   10864 start.go:128] duration metric: took 2.368389917s to createHost
	I0920 11:03:39.987797   10864 start.go:83] releasing machines lock for "embed-certs-228000", held for 2.368975083s
	W0920 11:03:39.988049   10864 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-228000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:39.996808   10864 out.go:201] 
	W0920 11:03:39.999847   10864 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:03:39.999867   10864 out.go:270] * 
	W0920 11:03:40.001918   10864 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:03:40.012772   10864 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-228000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000: exit status 7 (69.047791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-228000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.93s)
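Every qemu2 start in this report dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon (Failed to connect to "/var/run/socket_vmnet": Connection refused), so QEMU is never launched. On a Unix-domain socket, "connection refused" typically means the socket file exists but nothing is listening behind it. A minimal Go sketch of that check, assuming only the SocketVMnetPath from the cluster config above (a hypothetical diagnostic, not part of minikube or this test suite):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config dumps in this report.
	const sock = "/var/run/socket_vmnet"

	if _, err := os.Stat(sock); err != nil {
		fmt.Printf("socket file problem: %v\n", err) // daemon never created it
		os.Exit(1)
	}
	// With the daemon down, this dial fails exactly like the log above:
	// connect: connection refused.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not listening: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails, the fix lives outside minikube: restart the socket_vmnet daemon on the host (it normally runs as a root service) before re-running the suite.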

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-081000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-081000 create -f testdata/busybox.yaml: exit status 1 (29.618042ms)

** stderr ** 
	error: context "no-preload-081000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-081000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000: exit status 7 (29.546667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-081000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000: exit status 7 (29.80175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-081000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
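The kubectl-driven failures in this group (DeployApp, EnableAddonWhileActive, and the later serial steps) are all downstream of the start failure: minikube only writes a profile's context into the kubeconfig once the VM provisions, so with the host never created, every kubectl --context no-preload-081000 call fails with "context does not exist". A short client-go sketch that makes the dependency explicit (an illustrative helper, not part of this suite; it assumes the standard kubeconfig loading rules):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const name = "no-preload-081000" // profile/context the tests expect

	// Load kubeconfig with the same default rules kubectl uses
	// (KUBECONFIG if set, otherwise ~/.kube/config).
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "loading kubeconfig:", err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts[name]; !ok {
		// The exact condition behind: error: context "no-preload-081000" does not exist
		fmt.Printf("context %q missing - the cluster never started, debug that first\n", name)
		os.Exit(1)
	}
	fmt.Printf("context %q present\n", name)
}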

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-081000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-081000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-081000 describe deploy/metrics-server -n kube-system: exit status 1 (27.077125ms)

** stderr ** 
	error: context "no-preload-081000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-081000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000: exit status 7 (30.045167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-081000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-228000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-228000 create -f testdata/busybox.yaml: exit status 1 (30.521625ms)

** stderr ** 
	error: context "embed-certs-228000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-228000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000: exit status 7 (29.290791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-228000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000: exit status 7 (31.03625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-228000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/SecondStart (5.29s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-081000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-081000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.218540375s)

-- stdout --
	* [no-preload-081000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-081000" primary control-plane node in "no-preload-081000" cluster
	* Restarting existing qemu2 VM for "no-preload-081000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-081000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:03:40.199935   10924 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:03:40.200091   10924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:40.200094   10924 out.go:358] Setting ErrFile to fd 2...
	I0920 11:03:40.200097   10924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:40.200238   10924 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:03:40.201275   10924 out.go:352] Setting JSON to false
	I0920 11:03:40.220163   10924 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5583,"bootTime":1726849837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:03:40.220240   10924 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:03:40.223537   10924 out.go:177] * [no-preload-081000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:03:40.232646   10924 notify.go:220] Checking for updates...
	I0920 11:03:40.237614   10924 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:03:40.248532   10924 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:03:40.256611   10924 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:03:40.267563   10924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:03:40.270590   10924 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:03:40.277573   10924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:03:40.281991   10924 config.go:182] Loaded profile config "no-preload-081000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:03:40.282266   10924 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:03:40.286558   10924 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 11:03:40.293566   10924 start.go:297] selected driver: qemu2
	I0920 11:03:40.293572   10924 start.go:901] validating driver "qemu2" against &{Name:no-preload-081000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-081000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:03:40.293638   10924 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:03:40.296317   10924 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:03:40.296349   10924 cni.go:84] Creating CNI manager for ""
	I0920 11:03:40.296376   10924 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 11:03:40.296417   10924 start.go:340] cluster config:
	{Name:no-preload-081000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-081000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:03:40.300295   10924 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:40.307541   10924 out.go:177] * Starting "no-preload-081000" primary control-plane node in "no-preload-081000" cluster
	I0920 11:03:40.311532   10924 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:03:40.311609   10924 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/no-preload-081000/config.json ...
	I0920 11:03:40.311632   10924 cache.go:107] acquiring lock: {Name:mkc831d2b996411ad9b2ce79b491563b42f25287 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:40.311652   10924 cache.go:107] acquiring lock: {Name:mkb343b6b900343df8126f25514fcfee71a7f7e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:40.311714   10924 cache.go:115] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0920 11:03:40.311720   10924 cache.go:107] acquiring lock: {Name:mk2cd8767a5cdcfb3326e9f314f63b5f5cf06a4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:40.311635   10924 cache.go:107] acquiring lock: {Name:mkbb6420ffdb6693e507980985a45aa63f0801bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:40.311744   10924 cache.go:107] acquiring lock: {Name:mkbd28d3a4a2b79dca4b4c353f77c5c25904170b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:40.311760   10924 cache.go:115] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0920 11:03:40.311766   10924 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 135.167µs
	I0920 11:03:40.311771   10924 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0920 11:03:40.311714   10924 cache.go:107] acquiring lock: {Name:mkaff6d5273ff634f449ee5b6007cf26ed1b60f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:40.311713   10924 cache.go:115] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0920 11:03:40.311808   10924 cache.go:115] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0920 11:03:40.311811   10924 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 119.167µs
	I0920 11:03:40.311814   10924 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0920 11:03:40.311813   10924 cache.go:115] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0920 11:03:40.311819   10924 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 76.292µs
	I0920 11:03:40.311825   10924 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0920 11:03:40.311816   10924 cache.go:107] acquiring lock: {Name:mk67d80b59e9f7697a0e364977845f6173d61e38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:40.311827   10924 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 168.583µs
	I0920 11:03:40.311838   10924 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0920 11:03:40.311757   10924 cache.go:115] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0920 11:03:40.311722   10924 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.459µs
	I0920 11:03:40.311844   10924 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0920 11:03:40.311843   10924 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 123.5µs
	I0920 11:03:40.311848   10924 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0920 11:03:40.311874   10924 cache.go:115] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0920 11:03:40.311878   10924 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 72.958µs
	I0920 11:03:40.311882   10924 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0920 11:03:40.311828   10924 cache.go:107] acquiring lock: {Name:mk2a1761717faa717fa3a2eb4dc9244386285caa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:40.311948   10924 cache.go:115] /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0920 11:03:40.311957   10924 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 209.166µs
	I0920 11:03:40.311959   10924 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0920 11:03:40.311965   10924 cache.go:87] Successfully saved all images to host disk.
	I0920 11:03:40.312074   10924 start.go:360] acquireMachinesLock for no-preload-081000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:40.312104   10924 start.go:364] duration metric: took 23.917µs to acquireMachinesLock for "no-preload-081000"
	I0920 11:03:40.312113   10924 start.go:96] Skipping create...Using existing machine configuration
	I0920 11:03:40.312117   10924 fix.go:54] fixHost starting: 
	I0920 11:03:40.312231   10924 fix.go:112] recreateIfNeeded on no-preload-081000: state=Stopped err=<nil>
	W0920 11:03:40.312239   10924 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 11:03:40.320600   10924 out.go:177] * Restarting existing qemu2 VM for "no-preload-081000" ...
	I0920 11:03:40.324523   10924 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:40.324567   10924 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:e8:46:18:83:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2
	I0920 11:03:40.326563   10924 main.go:141] libmachine: STDOUT: 
	I0920 11:03:40.326584   10924 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:40.326616   10924 fix.go:56] duration metric: took 14.495416ms for fixHost
	I0920 11:03:40.326621   10924 start.go:83] releasing machines lock for "no-preload-081000", held for 14.513333ms
	W0920 11:03:40.326628   10924 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:03:40.326663   10924 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:40.326667   10924 start.go:729] Will try again in 5 seconds ...
	I0920 11:03:45.328877   10924 start.go:360] acquireMachinesLock for no-preload-081000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:45.329264   10924 start.go:364] duration metric: took 289.834µs to acquireMachinesLock for "no-preload-081000"
	I0920 11:03:45.329421   10924 start.go:96] Skipping create...Using existing machine configuration
	I0920 11:03:45.329440   10924 fix.go:54] fixHost starting: 
	I0920 11:03:45.330198   10924 fix.go:112] recreateIfNeeded on no-preload-081000: state=Stopped err=<nil>
	W0920 11:03:45.330224   10924 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 11:03:45.337391   10924 out.go:177] * Restarting existing qemu2 VM for "no-preload-081000" ...
	I0920 11:03:45.341535   10924 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:45.341812   10924 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:e8:46:18:83:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/no-preload-081000/disk.qcow2
	I0920 11:03:45.351109   10924 main.go:141] libmachine: STDOUT: 
	I0920 11:03:45.351171   10924 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:45.351243   10924 fix.go:56] duration metric: took 21.803709ms for fixHost
	I0920 11:03:45.351261   10924 start.go:83] releasing machines lock for "no-preload-081000", held for 21.974791ms
	W0920 11:03:45.351465   10924 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-081000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:45.359615   10924 out.go:201] 
	W0920 11:03:45.363834   10924 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:03:45.363894   10924 out.go:270] * 
	W0920 11:03:45.366216   10924 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:03:45.374579   10924 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-081000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000: exit status 7 (66.606667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-081000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.29s)
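The "executing:" lines above also show why a dead daemon is fatal rather than merely degraded: minikube does not start qemu-system-aarch64 directly but wraps it in /opt/socket_vmnet/bin/socket_vmnet_client, which first connects to /var/run/socket_vmnet and then hands QEMU the connection as an inherited file descriptor (hence -netdev socket,id=net0,fd=3). When that connect fails, QEMU never runs at all, which is why status keeps reporting "Stopped" rather than an unhealthy VM. A reduced sketch of that invocation shape (illustrative only, with the QEMU flag list heavily trimmed; not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// socket_vmnet_client <daemon socket> <wrapped command and args...>
	args := []string{
		"/var/run/socket_vmnet",
		"qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-netdev", "socket,id=net0,fd=3", // fd=3 is the daemon connection passed in
		"-device", "virtio-net-pci,netdev=net0",
	}
	out, err := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...).CombinedOutput()
	if err != nil {
		// With the daemon down this fails before QEMU ever executes, mirroring:
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Printf("%s%v\n", out, err)
	}
}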

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-228000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-228000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-228000 describe deploy/metrics-server -n kube-system: exit status 1 (28.240375ms)

** stderr ** 
	error: context "embed-certs-228000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-228000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000: exit status 7 (29.031167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-228000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.16s)

TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-228000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-228000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.198349833s)

                                                
                                                
-- stdout --
	* [embed-certs-228000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-228000" primary control-plane node in "embed-certs-228000" cluster
	* Restarting existing qemu2 VM for "embed-certs-228000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-228000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:03:44.028019   10959 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:03:44.028140   10959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:44.028143   10959 out.go:358] Setting ErrFile to fd 2...
	I0920 11:03:44.028146   10959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:44.028277   10959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:03:44.029316   10959 out.go:352] Setting JSON to false
	I0920 11:03:44.045474   10959 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5587,"bootTime":1726849837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:03:44.045552   10959 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:03:44.050354   10959 out.go:177] * [embed-certs-228000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:03:44.057397   10959 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:03:44.057492   10959 notify.go:220] Checking for updates...
	I0920 11:03:44.063302   10959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:03:44.066377   10959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:03:44.069270   10959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:03:44.072226   10959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:03:44.075284   10959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:03:44.078607   10959 config.go:182] Loaded profile config "embed-certs-228000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:03:44.078884   10959 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:03:44.083337   10959 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 11:03:44.089214   10959 start.go:297] selected driver: qemu2
	I0920 11:03:44.089220   10959 start.go:901] validating driver "qemu2" against &{Name:embed-certs-228000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-228000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:03:44.089274   10959 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:03:44.091586   10959 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:03:44.091613   10959 cni.go:84] Creating CNI manager for ""
	I0920 11:03:44.091648   10959 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 11:03:44.091676   10959 start.go:340] cluster config:
	{Name:embed-certs-228000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-228000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:03:44.095053   10959 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:44.103357   10959 out.go:177] * Starting "embed-certs-228000" primary control-plane node in "embed-certs-228000" cluster
	I0920 11:03:44.107328   10959 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:03:44.107344   10959 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:03:44.107358   10959 cache.go:56] Caching tarball of preloaded images
	I0920 11:03:44.107429   10959 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:03:44.107436   10959 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:03:44.107501   10959 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/embed-certs-228000/config.json ...
	I0920 11:03:44.108027   10959 start.go:360] acquireMachinesLock for embed-certs-228000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:44.108055   10959 start.go:364] duration metric: took 21.625µs to acquireMachinesLock for "embed-certs-228000"
	I0920 11:03:44.108064   10959 start.go:96] Skipping create...Using existing machine configuration
	I0920 11:03:44.108068   10959 fix.go:54] fixHost starting: 
	I0920 11:03:44.108184   10959 fix.go:112] recreateIfNeeded on embed-certs-228000: state=Stopped err=<nil>
	W0920 11:03:44.108192   10959 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 11:03:44.116279   10959 out.go:177] * Restarting existing qemu2 VM for "embed-certs-228000" ...
	I0920 11:03:44.120292   10959 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:44.120339   10959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9b:cc:b1:fd:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2
	I0920 11:03:44.122293   10959 main.go:141] libmachine: STDOUT: 
	I0920 11:03:44.122318   10959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:44.122349   10959 fix.go:56] duration metric: took 14.280417ms for fixHost
	I0920 11:03:44.122355   10959 start.go:83] releasing machines lock for "embed-certs-228000", held for 14.2965ms
	W0920 11:03:44.122361   10959 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:03:44.122390   10959 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:44.122395   10959 start.go:729] Will try again in 5 seconds ...
	I0920 11:03:49.124508   10959 start.go:360] acquireMachinesLock for embed-certs-228000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:49.124972   10959 start.go:364] duration metric: took 378.583µs to acquireMachinesLock for "embed-certs-228000"
	I0920 11:03:49.125094   10959 start.go:96] Skipping create...Using existing machine configuration
	I0920 11:03:49.125112   10959 fix.go:54] fixHost starting: 
	I0920 11:03:49.125835   10959 fix.go:112] recreateIfNeeded on embed-certs-228000: state=Stopped err=<nil>
	W0920 11:03:49.125861   10959 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 11:03:49.147286   10959 out.go:177] * Restarting existing qemu2 VM for "embed-certs-228000" ...
	I0920 11:03:49.152166   10959 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:49.152373   10959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9b:cc:b1:fd:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/embed-certs-228000/disk.qcow2
	I0920 11:03:49.161922   10959 main.go:141] libmachine: STDOUT: 
	I0920 11:03:49.162000   10959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:49.162116   10959 fix.go:56] duration metric: took 37.00325ms for fixHost
	I0920 11:03:49.162151   10959 start.go:83] releasing machines lock for "embed-certs-228000", held for 37.1535ms
	W0920 11:03:49.162359   10959 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-228000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-228000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:49.171217   10959 out.go:201] 
	W0920 11:03:49.174192   10959 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:03:49.174219   10959 out.go:270] * 
	* 
	W0920 11:03:49.177028   10959 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:03:49.184097   10959 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-228000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000: exit status 7 (66.946541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-228000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)
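Every failure in this test reduces to the root cause that recurs through the report: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, so qemu never receives a network file descriptor and the VM start is abandoned. The following minimal probe (a sketch only, not part of the suite) reproduces that check outside minikube; the socket path matches the SocketVMnetPath field in the cluster config above.

	// probe_socket.go - standalone sketch, not part of the test suite.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Same endpoint minikube's driver tries via socket_vmnet_client.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this host the error mirrors the log: connection refused.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Connection refused on a unix socket typically means the socket file exists but nothing is accepting on it, which points at the socket_vmnet service on the Jenkins host rather than at minikube itself.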

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-081000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000: exit status 7 (33.377084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-081000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
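The context "no-preload-081000" does not exist failures in this and the next few subtests are downstream symptoms: FirstStart never created the cluster, so the kubeconfig referenced by KUBECONFIG above contains no such context, and client construction fails before any Kubernetes API call is made. A sketch of that lookup, assuming client-go's clientcmd package; this mirrors the helper's behavior, not its literal code:

	// kubeconfig_context.go - sketch of a named-context lookup failing.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		rules := clientcmd.NewDefaultClientConfigLoadingRules() // honors $KUBECONFIG
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "no-preload-081000"}
		_, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
		if err != nil {
			// With the context absent, clientcmd reports:
			//   context "no-preload-081000" does not exist
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}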

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-081000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-081000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-081000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.672292ms)

** stderr ** 
	error: context "no-preload-081000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-081000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000: exit status 7 (29.074708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-081000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-081000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000: exit status 7 (29.435042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-081000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
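The (-want +got) diff above lists every expected v1.31.1 image as missing because minikube image list against a Stopped profile returns an empty set, not because individual pulls failed; the diff format suggests github.com/google/go-cmp. The hypothetical helper below (not the test's code) computes the same missing set directly:

	// missing_images.go - sketch of the want-vs-got comparison.
	package main

	import "fmt"

	func missingImages(want, got []string) []string {
		have := make(map[string]bool, len(got))
		for _, g := range got {
			have[g] = true
		}
		var out []string
		for _, w := range want {
			if !have[w] {
				out = append(out, w)
			}
		}
		return out
	}

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // empty: the VM never started
		fmt.Println(missingImages(want, got)) // both entries reported missing
	}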

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-081000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-081000 --alsologtostderr -v=1: exit status 83 (40.129375ms)

-- stdout --
	* The control-plane node no-preload-081000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-081000"

-- /stdout --
** stderr ** 
	I0920 11:03:45.645309   10978 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:03:45.645459   10978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:45.645462   10978 out.go:358] Setting ErrFile to fd 2...
	I0920 11:03:45.645464   10978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:45.645614   10978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:03:45.645846   10978 out.go:352] Setting JSON to false
	I0920 11:03:45.645853   10978 mustload.go:65] Loading cluster: no-preload-081000
	I0920 11:03:45.646061   10978 config.go:182] Loaded profile config "no-preload-081000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:03:45.650044   10978 out.go:177] * The control-plane node no-preload-081000 host is not running: state=Stopped
	I0920 11:03:45.653891   10978 out.go:177]   To start a cluster, run: "minikube start -p no-preload-081000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-081000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000: exit status 7 (29.6265ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-081000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000: exit status 7 (29.004958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-081000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
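The (dbg) Run / Non-zero exit lines follow one pattern throughout the report: run the binary, time it, and record the exit status (80 for the provisioning failures above, 83 when the control-plane host is stopped and minikube prints advice instead of pausing, 7 from status on a stopped host). A stand-in for that pattern, as a sketch rather than the actual helper in helpers_test.go:

	// run_cmd.go - sketch of the suite's run-and-capture pattern.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "no-preload-081000")
		out, err := cmd.CombinedOutput()
		elapsed := time.Since(start)
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Matches the report's "(dbg) Non-zero exit: ... exit status 83 (40.129375ms)" shape.
			fmt.Printf("Non-zero exit: exit status %d (%s)\n%s", ee.ExitCode(), elapsed, out)
		}
	}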

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-676000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-676000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.125243958s)

-- stdout --
	* [default-k8s-diff-port-676000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-676000" primary control-plane node in "default-k8s-diff-port-676000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-676000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:03:46.066653   11002 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:03:46.066768   11002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:46.066771   11002 out.go:358] Setting ErrFile to fd 2...
	I0920 11:03:46.066773   11002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:46.066907   11002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:03:46.067950   11002 out.go:352] Setting JSON to false
	I0920 11:03:46.083981   11002 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5589,"bootTime":1726849837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:03:46.084043   11002 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:03:46.088035   11002 out.go:177] * [default-k8s-diff-port-676000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:03:46.093970   11002 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:03:46.093996   11002 notify.go:220] Checking for updates...
	I0920 11:03:46.099893   11002 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:03:46.102922   11002 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:03:46.105957   11002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:03:46.107555   11002 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:03:46.110901   11002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:03:46.119394   11002 config.go:182] Loaded profile config "embed-certs-228000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:03:46.119458   11002 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:03:46.119506   11002 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:03:46.122810   11002 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:03:46.129939   11002 start.go:297] selected driver: qemu2
	I0920 11:03:46.129946   11002 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:03:46.129953   11002 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:03:46.132238   11002 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 11:03:46.133784   11002 out.go:177] * Automatically selected the socket_vmnet network
	I0920 11:03:46.137025   11002 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:03:46.137048   11002 cni.go:84] Creating CNI manager for ""
	I0920 11:03:46.137090   11002 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 11:03:46.137104   11002 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 11:03:46.137141   11002 start.go:340] cluster config:
	{Name:default-k8s-diff-port-676000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-676000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:03:46.140827   11002 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:46.148902   11002 out.go:177] * Starting "default-k8s-diff-port-676000" primary control-plane node in "default-k8s-diff-port-676000" cluster
	I0920 11:03:46.152932   11002 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:03:46.152946   11002 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:03:46.152955   11002 cache.go:56] Caching tarball of preloaded images
	I0920 11:03:46.153023   11002 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:03:46.153028   11002 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:03:46.153099   11002 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/default-k8s-diff-port-676000/config.json ...
	I0920 11:03:46.153110   11002 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/default-k8s-diff-port-676000/config.json: {Name:mk87eab6af3792d9f5464edd2ba5ed2f354c34b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:03:46.153544   11002 start.go:360] acquireMachinesLock for default-k8s-diff-port-676000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:46.153583   11002 start.go:364] duration metric: took 30.417µs to acquireMachinesLock for "default-k8s-diff-port-676000"
	I0920 11:03:46.153597   11002 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-676000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:03:46.153628   11002 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:03:46.160893   11002 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 11:03:46.178587   11002 start.go:159] libmachine.API.Create for "default-k8s-diff-port-676000" (driver="qemu2")
	I0920 11:03:46.178613   11002 client.go:168] LocalClient.Create starting
	I0920 11:03:46.178687   11002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:03:46.178719   11002 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:46.178730   11002 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:46.178774   11002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:03:46.178797   11002 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:46.178805   11002 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:46.179288   11002 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:03:46.348836   11002 main.go:141] libmachine: Creating SSH key...
	I0920 11:03:46.599580   11002 main.go:141] libmachine: Creating Disk image...
	I0920 11:03:46.599588   11002 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:03:46.599849   11002 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2
	I0920 11:03:46.609901   11002 main.go:141] libmachine: STDOUT: 
	I0920 11:03:46.609921   11002 main.go:141] libmachine: STDERR: 
	I0920 11:03:46.609982   11002 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2 +20000M
	I0920 11:03:46.617915   11002 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:03:46.617938   11002 main.go:141] libmachine: STDERR: 
	I0920 11:03:46.617959   11002 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2
	I0920 11:03:46.617965   11002 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:03:46.617977   11002 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:46.618003   11002 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:b6:27:8e:be:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2
	I0920 11:03:46.619592   11002 main.go:141] libmachine: STDOUT: 
	I0920 11:03:46.619606   11002 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:46.619628   11002 client.go:171] duration metric: took 441.012125ms to LocalClient.Create
	I0920 11:03:48.621795   11002 start.go:128] duration metric: took 2.46815975s to createHost
	I0920 11:03:48.621907   11002 start.go:83] releasing machines lock for "default-k8s-diff-port-676000", held for 2.468326875s
	W0920 11:03:48.621974   11002 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:48.641083   11002 out.go:177] * Deleting "default-k8s-diff-port-676000" in qemu2 ...
	W0920 11:03:48.676784   11002 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:48.676802   11002 start.go:729] Will try again in 5 seconds ...
	I0920 11:03:53.678993   11002 start.go:360] acquireMachinesLock for default-k8s-diff-port-676000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:53.679441   11002 start.go:364] duration metric: took 353.083µs to acquireMachinesLock for "default-k8s-diff-port-676000"
	I0920 11:03:53.679584   11002 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-676000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:03:53.679867   11002 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:03:53.685541   11002 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 11:03:53.735118   11002 start.go:159] libmachine.API.Create for "default-k8s-diff-port-676000" (driver="qemu2")
	I0920 11:03:53.735165   11002 client.go:168] LocalClient.Create starting
	I0920 11:03:53.735285   11002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:03:53.735354   11002 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:53.735370   11002 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:53.735427   11002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:03:53.735471   11002 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:53.735486   11002 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:53.736142   11002 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:03:53.960891   11002 main.go:141] libmachine: Creating SSH key...
	I0920 11:03:54.098767   11002 main.go:141] libmachine: Creating Disk image...
	I0920 11:03:54.098777   11002 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:03:54.098980   11002 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2
	I0920 11:03:54.108269   11002 main.go:141] libmachine: STDOUT: 
	I0920 11:03:54.108284   11002 main.go:141] libmachine: STDERR: 
	I0920 11:03:54.108341   11002 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2 +20000M
	I0920 11:03:54.116159   11002 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:03:54.116174   11002 main.go:141] libmachine: STDERR: 
	I0920 11:03:54.116187   11002 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2
	I0920 11:03:54.116195   11002 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:03:54.116206   11002 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:54.116246   11002 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:a9:80:ee:af:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2
	I0920 11:03:54.117895   11002 main.go:141] libmachine: STDOUT: 
	I0920 11:03:54.117911   11002 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:54.117925   11002 client.go:171] duration metric: took 382.754167ms to LocalClient.Create
	I0920 11:03:56.118515   11002 start.go:128] duration metric: took 2.438622459s to createHost
	I0920 11:03:56.118595   11002 start.go:83] releasing machines lock for "default-k8s-diff-port-676000", held for 2.43914325s
	W0920 11:03:56.118960   11002 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:56.132165   11002 out.go:201] 
	W0920 11:03:56.137429   11002 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:03:56.137456   11002 out.go:270] * 
	* 
	W0920 11:03:56.139968   11002 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:03:56.149224   11002 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-676000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000: exit status 7 (65.974792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.19s)
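Each qemu invocation in this report ends with -netdev socket,id=net0,fd=3: socket_vmnet_client is expected to connect to /var/run/socket_vmnet first and hand the connected descriptor to qemu as fd 3, which is why the whole start collapses when that connect is refused. socket_vmnet_client itself is a C program; the same descriptor-passing pattern expressed in Go (a sketch under that assumption, not minikube's code, with the qemu arguments abbreviated from the command line above):

	// fd_passing.go - sketch of the fd=3 hand-off behind the qemu -netdev flag.
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatal(err) // the step that fails throughout this report
		}
		f, err := conn.(*net.UnixConn).File() // dup the connected socket into an *os.File
		if err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] is inherited by the child as fd 3
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
	}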

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-228000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000: exit status 7 (31.71025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-228000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-228000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-228000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-228000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.104333ms)

** stderr ** 
	error: context "embed-certs-228000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-228000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000: exit status 7 (29.809042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-228000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-228000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000: exit status 7 (29.511959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-228000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-228000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-228000 --alsologtostderr -v=1: exit status 83 (41.817917ms)

-- stdout --
	* The control-plane node embed-certs-228000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-228000"

-- /stdout --
** stderr ** 
	I0920 11:03:49.453489   11027 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:03:49.453963   11027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:49.453977   11027 out.go:358] Setting ErrFile to fd 2...
	I0920 11:03:49.453985   11027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:49.454565   11027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:03:49.454821   11027 out.go:352] Setting JSON to false
	I0920 11:03:49.454841   11027 mustload.go:65] Loading cluster: embed-certs-228000
	I0920 11:03:49.455064   11027 config.go:182] Loaded profile config "embed-certs-228000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:03:49.458188   11027 out.go:177] * The control-plane node embed-certs-228000 host is not running: state=Stopped
	I0920 11:03:49.462204   11027 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-228000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-228000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000: exit status 7 (29.425375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-228000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000: exit status 7 (29.717416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-228000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
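
Note: the exit codes in this group are distinguishable in automation: failed starts exit 80 (paired with GUEST_PROVISION below), while pausing a stopped host exits 83 with the state=Stopped hint. A sketch of gating on that, using the profile from this run:

    out/minikube-darwin-arm64 pause -p embed-certs-228000 --alsologtostderr -v=1
    case $? in
      0)  echo "paused" ;;
      83) echo "host not running; start the profile first" ;;  # matches the state=Stopped message above
      *)  echo "unexpected failure" ;;
    esac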

TestStartStop/group/newest-cni/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-082000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-082000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.798219625s)

-- stdout --
	* [newest-cni-082000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-082000" primary control-plane node in "newest-cni-082000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-082000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:03:49.769386   11044 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:03:49.769520   11044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:49.769523   11044 out.go:358] Setting ErrFile to fd 2...
	I0920 11:03:49.769526   11044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:49.769642   11044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:03:49.770761   11044 out.go:352] Setting JSON to false
	I0920 11:03:49.787027   11044 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5592,"bootTime":1726849837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:03:49.787092   11044 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:03:49.792170   11044 out.go:177] * [newest-cni-082000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:03:49.799254   11044 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:03:49.799285   11044 notify.go:220] Checking for updates...
	I0920 11:03:49.807151   11044 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:03:49.810205   11044 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:03:49.813123   11044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:03:49.816354   11044 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:03:49.819131   11044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:03:49.822528   11044 config.go:182] Loaded profile config "default-k8s-diff-port-676000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:03:49.822592   11044 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:03:49.822652   11044 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:03:49.827139   11044 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 11:03:49.834124   11044 start.go:297] selected driver: qemu2
	I0920 11:03:49.834133   11044 start.go:901] validating driver "qemu2" against <nil>
	I0920 11:03:49.834141   11044 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:03:49.836587   11044 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0920 11:03:49.836627   11044 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0920 11:03:49.841141   11044 out.go:177] * Automatically selected the socket_vmnet network
	I0920 11:03:49.848148   11044 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 11:03:49.848165   11044 cni.go:84] Creating CNI manager for ""
	I0920 11:03:49.848187   11044 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 11:03:49.848192   11044 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 11:03:49.848225   11044 start.go:340] cluster config:
	{Name:newest-cni-082000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-082000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:03:49.852095   11044 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:49.860951   11044 out.go:177] * Starting "newest-cni-082000" primary control-plane node in "newest-cni-082000" cluster
	I0920 11:03:49.865148   11044 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:03:49.865166   11044 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:03:49.865173   11044 cache.go:56] Caching tarball of preloaded images
	I0920 11:03:49.865254   11044 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:03:49.865260   11044 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:03:49.865323   11044 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/newest-cni-082000/config.json ...
	I0920 11:03:49.865339   11044 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/newest-cni-082000/config.json: {Name:mk101cff211daa3fa341940de8ec87432a438b62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 11:03:49.865562   11044 start.go:360] acquireMachinesLock for newest-cni-082000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:49.865601   11044 start.go:364] duration metric: took 33.458µs to acquireMachinesLock for "newest-cni-082000"
	I0920 11:03:49.865615   11044 start.go:93] Provisioning new machine with config: &{Name:newest-cni-082000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-082000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:03:49.865643   11044 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:03:49.872128   11044 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 11:03:49.890584   11044 start.go:159] libmachine.API.Create for "newest-cni-082000" (driver="qemu2")
	I0920 11:03:49.890614   11044 client.go:168] LocalClient.Create starting
	I0920 11:03:49.890678   11044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:03:49.890708   11044 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:49.890721   11044 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:49.890759   11044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:03:49.890781   11044 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:49.890788   11044 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:49.891123   11044 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:03:50.057685   11044 main.go:141] libmachine: Creating SSH key...
	I0920 11:03:50.088603   11044 main.go:141] libmachine: Creating Disk image...
	I0920 11:03:50.088608   11044 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:03:50.088796   11044 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2
	I0920 11:03:50.097863   11044 main.go:141] libmachine: STDOUT: 
	I0920 11:03:50.097879   11044 main.go:141] libmachine: STDERR: 
	I0920 11:03:50.097948   11044 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2 +20000M
	I0920 11:03:50.105823   11044 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:03:50.105841   11044 main.go:141] libmachine: STDERR: 
	I0920 11:03:50.105859   11044 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2
	I0920 11:03:50.105866   11044 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:03:50.105878   11044 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:50.105905   11044 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:9c:ee:6d:ea:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2
	I0920 11:03:50.107523   11044 main.go:141] libmachine: STDOUT: 
	I0920 11:03:50.107538   11044 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:50.107564   11044 client.go:171] duration metric: took 216.94425ms to LocalClient.Create
	I0920 11:03:52.109732   11044 start.go:128] duration metric: took 2.244079958s to createHost
	I0920 11:03:52.109781   11044 start.go:83] releasing machines lock for "newest-cni-082000", held for 2.244181166s
	W0920 11:03:52.109852   11044 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:52.120987   11044 out.go:177] * Deleting "newest-cni-082000" in qemu2 ...
	W0920 11:03:52.164277   11044 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:52.164304   11044 start.go:729] Will try again in 5 seconds ...
	I0920 11:03:57.166525   11044 start.go:360] acquireMachinesLock for newest-cni-082000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:57.166920   11044 start.go:364] duration metric: took 299.5µs to acquireMachinesLock for "newest-cni-082000"
	I0920 11:03:57.167093   11044 start.go:93] Provisioning new machine with config: &{Name:newest-cni-082000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-082000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 11:03:57.167499   11044 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 11:03:57.173230   11044 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 11:03:57.223116   11044 start.go:159] libmachine.API.Create for "newest-cni-082000" (driver="qemu2")
	I0920 11:03:57.223170   11044 client.go:168] LocalClient.Create starting
	I0920 11:03:57.223287   11044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/ca.pem
	I0920 11:03:57.223337   11044 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:57.223354   11044 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:57.223415   11044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19678-6679/.minikube/certs/cert.pem
	I0920 11:03:57.223445   11044 main.go:141] libmachine: Decoding PEM data...
	I0920 11:03:57.223456   11044 main.go:141] libmachine: Parsing certificate...
	I0920 11:03:57.224154   11044 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 11:03:57.433168   11044 main.go:141] libmachine: Creating SSH key...
	I0920 11:03:57.484692   11044 main.go:141] libmachine: Creating Disk image...
	I0920 11:03:57.484698   11044 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 11:03:57.484882   11044 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2.raw /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2
	I0920 11:03:57.494690   11044 main.go:141] libmachine: STDOUT: 
	I0920 11:03:57.494707   11044 main.go:141] libmachine: STDERR: 
	I0920 11:03:57.494779   11044 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2 +20000M
	I0920 11:03:57.502570   11044 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 11:03:57.502596   11044 main.go:141] libmachine: STDERR: 
	I0920 11:03:57.502608   11044 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2
	I0920 11:03:57.502613   11044 main.go:141] libmachine: Starting QEMU VM...
	I0920 11:03:57.502621   11044 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:57.502648   11044 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:51:e1:85:89:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2
	I0920 11:03:57.504271   11044 main.go:141] libmachine: STDOUT: 
	I0920 11:03:57.504285   11044 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:57.504297   11044 client.go:171] duration metric: took 281.120875ms to LocalClient.Create
	I0920 11:03:59.506349   11044 start.go:128] duration metric: took 2.338846209s to createHost
	I0920 11:03:59.506365   11044 start.go:83] releasing machines lock for "newest-cni-082000", held for 2.33943925s
	W0920 11:03:59.506439   11044 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-082000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-082000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:59.513689   11044 out.go:201] 
	W0920 11:03:59.519197   11044 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:03:59.519204   11044 out.go:270] * 
	* 
	W0920 11:03:59.519944   11044 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:03:59.531226   11044 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-082000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-082000 -n newest-cni-082000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-082000 -n newest-cni-082000: exit status 7 (35.3755ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-082000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.83s)
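
Note: both creation attempts die at the same step: socket_vmnet_client cannot reach "/var/run/socket_vmnet", so the qemu-system-aarch64 process is never handed a network fd. A plausible triage on the CI host, assuming socket_vmnet is the Homebrew-managed service the qemu2 driver docs recommend:

    ls -l /var/run/socket_vmnet              # does the socket exist at the path minikube uses?
    pgrep -fl socket_vmnet                   # is the daemon actually running?
    sudo brew services restart socket_vmnet  # assumption: installed and managed via Homebrew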

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-676000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-676000 create -f testdata/busybox.yaml: exit status 1 (29.344084ms)

** stderr ** 
	error: context "default-k8s-diff-port-676000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-676000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000: exit status 7 (28.791625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-676000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000: exit status 7 (28.633708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
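
Note: the context "default-k8s-diff-port-676000" does not exist error is a downstream symptom: because the profile's VM never started, minikube never wrote a context into the kubeconfig, so every kubectl call in this group fails the same way. Checking directly (kubeconfig path taken from this run):

    KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig kubectl config get-contexts
    # default-k8s-diff-port-676000 should be missing from the output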

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-676000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-676000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-676000 describe deploy/metrics-server -n kube-system: exit status 1 (27.284458ms)

** stderr ** 
	error: context "default-k8s-diff-port-676000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-676000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000: exit status 7 (29.686292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
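
Note: "addons enable" itself is not what failed here (no non-zero exit is logged for it); only the follow-up describe did. On a running cluster, the assertion at start_stop_delete_test.go:221 amounts to:

    kubectl --context default-k8s-diff-port-676000 describe deploy/metrics-server -n kube-system | grep "Image:"
    # expected to contain: fake.domain/registry.k8s.io/echoserver:1.4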

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-676000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-676000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.203409792s)

-- stdout --
	* [default-k8s-diff-port-676000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-676000" primary control-plane node in "default-k8s-diff-port-676000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-676000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-676000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:03:59.622805   11100 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:03:59.622939   11100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:59.622943   11100 out.go:358] Setting ErrFile to fd 2...
	I0920 11:03:59.622945   11100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:03:59.623069   11100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:03:59.624092   11100 out.go:352] Setting JSON to false
	I0920 11:03:59.641660   11100 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5602,"bootTime":1726849837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:03:59.641743   11100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:03:59.645212   11100 out.go:177] * [default-k8s-diff-port-676000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:03:59.652194   11100 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:03:59.652202   11100 notify.go:220] Checking for updates...
	I0920 11:03:59.658178   11100 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:03:59.665200   11100 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:03:59.672160   11100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:03:59.680167   11100 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:03:59.688161   11100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:03:59.691466   11100 config.go:182] Loaded profile config "default-k8s-diff-port-676000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:03:59.691720   11100 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:03:59.695131   11100 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 11:03:59.702191   11100 start.go:297] selected driver: qemu2
	I0920 11:03:59.702195   11100 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-676000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:03:59.702238   11100 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:03:59.704628   11100 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 11:03:59.704703   11100 cni.go:84] Creating CNI manager for ""
	I0920 11:03:59.704724   11100 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 11:03:59.704751   11100 start.go:340] cluster config:
	{Name:default-k8s-diff-port-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-676000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:03:59.708117   11100 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:03:59.716168   11100 out.go:177] * Starting "default-k8s-diff-port-676000" primary control-plane node in "default-k8s-diff-port-676000" cluster
	I0920 11:03:59.719129   11100 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:03:59.719147   11100 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:03:59.719153   11100 cache.go:56] Caching tarball of preloaded images
	I0920 11:03:59.719217   11100 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:03:59.719223   11100 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:03:59.719281   11100 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/default-k8s-diff-port-676000/config.json ...
	I0920 11:03:59.719688   11100 start.go:360] acquireMachinesLock for default-k8s-diff-port-676000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:03:59.719719   11100 start.go:364] duration metric: took 23.75µs to acquireMachinesLock for "default-k8s-diff-port-676000"
	I0920 11:03:59.719730   11100 start.go:96] Skipping create...Using existing machine configuration
	I0920 11:03:59.719735   11100 fix.go:54] fixHost starting: 
	I0920 11:03:59.719851   11100 fix.go:112] recreateIfNeeded on default-k8s-diff-port-676000: state=Stopped err=<nil>
	W0920 11:03:59.719859   11100 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 11:03:59.723232   11100 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-676000" ...
	I0920 11:03:59.731209   11100 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:03:59.731254   11100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:a9:80:ee:af:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2
	I0920 11:03:59.733372   11100 main.go:141] libmachine: STDOUT: 
	I0920 11:03:59.733395   11100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:03:59.733425   11100 fix.go:56] duration metric: took 13.689875ms for fixHost
	I0920 11:03:59.733429   11100 start.go:83] releasing machines lock for "default-k8s-diff-port-676000", held for 13.705625ms
	W0920 11:03:59.733438   11100 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:03:59.733490   11100 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:03:59.733495   11100 start.go:729] Will try again in 5 seconds ...
	I0920 11:04:04.735682   11100 start.go:360] acquireMachinesLock for default-k8s-diff-port-676000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:04:04.736141   11100 start.go:364] duration metric: took 336.917µs to acquireMachinesLock for "default-k8s-diff-port-676000"
	I0920 11:04:04.736262   11100 start.go:96] Skipping create...Using existing machine configuration
	I0920 11:04:04.736281   11100 fix.go:54] fixHost starting: 
	I0920 11:04:04.737058   11100 fix.go:112] recreateIfNeeded on default-k8s-diff-port-676000: state=Stopped err=<nil>
	W0920 11:04:04.737087   11100 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 11:04:04.745526   11100 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-676000" ...
	I0920 11:04:04.749620   11100 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:04:04.749839   11100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:a9:80:ee:af:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/default-k8s-diff-port-676000/disk.qcow2
	I0920 11:04:04.759135   11100 main.go:141] libmachine: STDOUT: 
	I0920 11:04:04.759206   11100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:04:04.759278   11100 fix.go:56] duration metric: took 22.99725ms for fixHost
	I0920 11:04:04.759294   11100 start.go:83] releasing machines lock for "default-k8s-diff-port-676000", held for 23.133875ms
	W0920 11:04:04.759460   11100 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-676000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-676000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:04:04.766661   11100 out.go:201] 
	W0920 11:04:04.770722   11100 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:04:04.770790   11100 out.go:270] * 
	* 
	W0920 11:04:04.773617   11100 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:04:04.780638   11100 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-676000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000: exit status 7 (66.994709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-082000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-082000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.188169625s)

-- stdout --
	* [newest-cni-082000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-082000" primary control-plane node in "newest-cni-082000" cluster
	* Restarting existing qemu2 VM for "newest-cni-082000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-082000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 11:04:03.157560   11130 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:04:03.157696   11130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:04:03.157700   11130 out.go:358] Setting ErrFile to fd 2...
	I0920 11:04:03.157702   11130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:04:03.157831   11130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:04:03.158806   11130 out.go:352] Setting JSON to false
	I0920 11:04:03.174860   11130 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5606,"bootTime":1726849837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 11:04:03.174928   11130 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 11:04:03.180073   11130 out.go:177] * [newest-cni-082000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 11:04:03.187033   11130 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 11:04:03.187091   11130 notify.go:220] Checking for updates...
	I0920 11:04:03.194052   11130 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 11:04:03.197056   11130 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 11:04:03.200037   11130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 11:04:03.203004   11130 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 11:04:03.206004   11130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 11:04:03.209406   11130 config.go:182] Loaded profile config "newest-cni-082000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:04:03.209681   11130 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 11:04:03.213932   11130 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 11:04:03.221007   11130 start.go:297] selected driver: qemu2
	I0920 11:04:03.221013   11130 start.go:901] validating driver "qemu2" against &{Name:newest-cni-082000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-082000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:04:03.221061   11130 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 11:04:03.223436   11130 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 11:04:03.223460   11130 cni.go:84] Creating CNI manager for ""
	I0920 11:04:03.223496   11130 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 11:04:03.223523   11130 start.go:340] cluster config:
	{Name:newest-cni-082000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-082000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 11:04:03.226946   11130 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 11:04:03.238552   11130 out.go:177] * Starting "newest-cni-082000" primary control-plane node in "newest-cni-082000" cluster
	I0920 11:04:03.242977   11130 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 11:04:03.242990   11130 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 11:04:03.242996   11130 cache.go:56] Caching tarball of preloaded images
	I0920 11:04:03.243053   11130 preload.go:172] Found /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 11:04:03.243058   11130 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 11:04:03.243124   11130 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/newest-cni-082000/config.json ...
	I0920 11:04:03.243630   11130 start.go:360] acquireMachinesLock for newest-cni-082000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:04:03.243659   11130 start.go:364] duration metric: took 22.834µs to acquireMachinesLock for "newest-cni-082000"
	I0920 11:04:03.243669   11130 start.go:96] Skipping create...Using existing machine configuration
	I0920 11:04:03.243674   11130 fix.go:54] fixHost starting: 
	I0920 11:04:03.243805   11130 fix.go:112] recreateIfNeeded on newest-cni-082000: state=Stopped err=<nil>
	W0920 11:04:03.243813   11130 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 11:04:03.248003   11130 out.go:177] * Restarting existing qemu2 VM for "newest-cni-082000" ...
	I0920 11:04:03.254983   11130 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:04:03.255017   11130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:51:e1:85:89:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2
	I0920 11:04:03.256909   11130 main.go:141] libmachine: STDOUT: 
	I0920 11:04:03.256926   11130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:04:03.256957   11130 fix.go:56] duration metric: took 13.281083ms for fixHost
	I0920 11:04:03.256961   11130 start.go:83] releasing machines lock for "newest-cni-082000", held for 13.2975ms
	W0920 11:04:03.256967   11130 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:04:03.257003   11130 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:04:03.257010   11130 start.go:729] Will try again in 5 seconds ...
	I0920 11:04:08.259185   11130 start.go:360] acquireMachinesLock for newest-cni-082000: {Name:mk8b0da1be10810ec041054fbdcadda1496b405e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 11:04:08.259696   11130 start.go:364] duration metric: took 416.083µs to acquireMachinesLock for "newest-cni-082000"
	I0920 11:04:08.259830   11130 start.go:96] Skipping create...Using existing machine configuration
	I0920 11:04:08.259849   11130 fix.go:54] fixHost starting: 
	I0920 11:04:08.260577   11130 fix.go:112] recreateIfNeeded on newest-cni-082000: state=Stopped err=<nil>
	W0920 11:04:08.260603   11130 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 11:04:08.268880   11130 out.go:177] * Restarting existing qemu2 VM for "newest-cni-082000" ...
	I0920 11:04:08.271963   11130 qemu.go:418] Using hvf for hardware acceleration
	I0920 11:04:08.272182   11130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:51:e1:85:89:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19678-6679/.minikube/machines/newest-cni-082000/disk.qcow2
	I0920 11:04:08.282355   11130 main.go:141] libmachine: STDOUT: 
	I0920 11:04:08.282462   11130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 11:04:08.282566   11130 fix.go:56] duration metric: took 22.717ms for fixHost
	I0920 11:04:08.282585   11130 start.go:83] releasing machines lock for "newest-cni-082000", held for 22.866875ms
	W0920 11:04:08.282781   11130 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-082000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-082000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 11:04:08.290957   11130 out.go:201] 
	W0920 11:04:08.294203   11130 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 11:04:08.294227   11130 out.go:270] * 
	* 
	W0920 11:04:08.296648   11130 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 11:04:08.308781   11130 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-082000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
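
Both restart attempts die at the same point: QEMU is launched through socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket. A minimal sketch that probes the socket directly, to separate a stopped daemon from a QEMU-side failure (socket path taken from the log; this check is not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// socket_vmnet_client connects to this unix socket before launching QEMU;
		// "connection refused" here reproduces the failure mode in the run above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
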
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-082000 -n newest-cni-082000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-082000 -n newest-cni-082000: exit status 7 (69.353417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-082000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-676000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000: exit status 7 (32.581334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
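
The `context "default-k8s-diff-port-676000" does not exist` failure comes from client-go's kubeconfig loader: the cluster never restarted, so no context was written back to the kubeconfig. A minimal reproduction of that loader error (sketch; the harness reaches it through its own wrappers):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		// Requesting a context that was never written to the kubeconfig...
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "default-k8s-diff-port-676000"}
		cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
		if _, err := cfg.ClientConfig(); err != nil {
			fmt.Println(err) // ...yields: context "default-k8s-diff-port-676000" does not exist
		}
	}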

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-676000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-676000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-676000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.098625ms)

** stderr ** 
	error: context "default-k8s-diff-port-676000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-676000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000: exit status 7 (29.723042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-676000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
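
Every expected image sits on the want side of the diff and nothing appears on the got side: `image list` against a VM that never started returns an empty list. The `-want +got` rendering is a go-cmp diff; a minimal sketch of how such output is produced (illustrative, not the test's exact code):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // an empty image list from a stopped VM
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}
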
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000: exit status 7 (29.19825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-676000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-676000 --alsologtostderr -v=1: exit status 83 (41.341291ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-676000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-676000"

-- /stdout --
** stderr ** 
	I0920 11:04:05.050340   11149 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:04:05.050493   11149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:04:05.050496   11149 out.go:358] Setting ErrFile to fd 2...
	I0920 11:04:05.050499   11149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:04:05.050619   11149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:04:05.050822   11149 out.go:352] Setting JSON to false
	I0920 11:04:05.050830   11149 mustload.go:65] Loading cluster: default-k8s-diff-port-676000
	I0920 11:04:05.051043   11149 config.go:182] Loaded profile config "default-k8s-diff-port-676000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:04:05.054444   11149 out.go:177] * The control-plane node default-k8s-diff-port-676000 host is not running: state=Stopped
	I0920 11:04:05.058488   11149 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-676000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-676000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000: exit status 7 (29.371416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-676000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000: exit status 7 (28.787542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-082000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-082000 -n newest-cni-082000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-082000 -n newest-cni-082000: exit status 7 (30.5875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-082000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-082000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-082000 --alsologtostderr -v=1: exit status 83 (41.906541ms)

-- stdout --
	* The control-plane node newest-cni-082000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-082000"

-- /stdout --
** stderr ** 
	I0920 11:04:08.492440   11173 out.go:345] Setting OutFile to fd 1 ...
	I0920 11:04:08.492591   11173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:04:08.492594   11173 out.go:358] Setting ErrFile to fd 2...
	I0920 11:04:08.492597   11173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 11:04:08.492721   11173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 11:04:08.492949   11173 out.go:352] Setting JSON to false
	I0920 11:04:08.492956   11173 mustload.go:65] Loading cluster: newest-cni-082000
	I0920 11:04:08.493184   11173 config.go:182] Loaded profile config "newest-cni-082000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 11:04:08.496180   11173 out.go:177] * The control-plane node newest-cni-082000 host is not running: state=Stopped
	I0920 11:04:08.500030   11173 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-082000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-082000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-082000 -n newest-cni-082000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-082000 -n newest-cni-082000: exit status 7 (30.62125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-082000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-082000 -n newest-cni-082000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-082000 -n newest-cni-082000: exit status 7 (31.069542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-082000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (79/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 6.54
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
35 TestHyperKitDriverInstallOrUpdate 10.49
39 TestErrorSpam/start 0.39
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 8.83
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.73
55 TestFunctional/serial/CacheCmd/cache/add_local 1.07
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.24
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.1
93 TestFunctional/parallel/License 0.24
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.93
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
126 TestFunctional/parallel/ProfileCmd/profile_list 0.08
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.43
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.2
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 0.99
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.36
258 TestNoKubernetes/serial/Stop 1.94
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.62
275 TestStartStop/group/old-k8s-version/serial/Stop 2.09
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
288 TestStartStop/group/no-preload/serial/Stop 3.66
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
293 TestStartStop/group/embed-certs/serial/Stop 3.52
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.02
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.14
312 TestStartStop/group/newest-cni/serial/DeployApp 0
313 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
315 TestStartStop/group/newest-cni/serial/Stop 3.37
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 10:38:34.885444    7191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0920 10:38:34.885796    7191 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
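
preload-exists passes by confirming the cached tarball is already on disk at its deterministic cache path, so no network access is needed. A minimal equivalent of that check (path mirrors the log; sketch only):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// The cache path is derived from the k8s version, runtime, and arch.
		p := filepath.Join(os.Getenv("MINIKUBE_HOME"), "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4")
		if _, err := os.Stat(p); err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Println("preload found:", p)
	}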

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-195000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-195000: exit status 85 (98.905709ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-195000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |          |
	|         | -p download-only-195000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 10:38:23
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 10:38:23.300929    7192 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:23.301083    7192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:23.301086    7192 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:23.301089    7192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:23.301252    7192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	W0920 10:38:23.301351    7192 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19678-6679/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19678-6679/.minikube/config/config.json: no such file or directory
	I0920 10:38:23.302708    7192 out.go:352] Setting JSON to true
	I0920 10:38:23.320841    7192 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4066,"bootTime":1726849837,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:38:23.320911    7192 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:38:23.323980    7192 out.go:97] [download-only-195000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:38:23.324135    7192 notify.go:220] Checking for updates...
	W0920 10:38:23.324166    7192 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 10:38:23.327581    7192 out.go:169] MINIKUBE_LOCATION=19678
	I0920 10:38:23.332641    7192 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:38:23.336585    7192 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:38:23.339630    7192 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:38:23.342582    7192 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	W0920 10:38:23.347588    7192 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 10:38:23.347845    7192 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:38:23.351588    7192 out.go:97] Using the qemu2 driver based on user configuration
	I0920 10:38:23.351607    7192 start.go:297] selected driver: qemu2
	I0920 10:38:23.351611    7192 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:38:23.351715    7192 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:38:23.354566    7192 out.go:169] Automatically selected the socket_vmnet network
	I0920 10:38:23.360078    7192 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0920 10:38:23.360187    7192 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:38:23.360244    7192 cni.go:84] Creating CNI manager for ""
	I0920 10:38:23.360291    7192 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 10:38:23.360345    7192 start.go:340] cluster config:
	{Name:download-only-195000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-195000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:38:23.364097    7192 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:38:23.368658    7192 out.go:97] Downloading VM boot image ...
	I0920 10:38:23.368675    7192 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso
	I0920 10:38:28.117652    7192 out.go:97] Starting "download-only-195000" primary control-plane node in "download-only-195000" cluster
	I0920 10:38:28.117678    7192 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:38:28.184555    7192 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:38:28.184562    7192 cache.go:56] Caching tarball of preloaded images
	I0920 10:38:28.184742    7192 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:38:28.189898    7192 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 10:38:28.189908    7192 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:38:28.279568    7192 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:38:33.582326    7192 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:38:33.582500    7192 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:38:34.277933    7192 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 10:38:34.278147    7192 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/download-only-195000/config.json ...
	I0920 10:38:34.278167    7192 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/download-only-195000/config.json: {Name:mk3e12fefb3ec8be2d7682ae7e0695fdf0524380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:38:34.278407    7192 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:38:34.279249    7192 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0920 10:38:34.829778    7192 out.go:193] 
	W0920 10:38:34.840919    7192 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19678-6679/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1068356c0 0x1068356c0 0x1068356c0 0x1068356c0 0x1068356c0 0x1068356c0 0x1068356c0] Decompressors:map[bz2:0x14000539d00 gz:0x14000539d08 tar:0x14000539cb0 tar.bz2:0x14000539cc0 tar.gz:0x14000539cd0 tar.xz:0x14000539ce0 tar.zst:0x14000539cf0 tbz2:0x14000539cc0 tgz:0x14000539cd0 txz:0x14000539ce0 tzst:0x14000539cf0 xz:0x14000539d10 zip:0x14000539d20 zst:0x14000539d18] Getters:map[file:0x1400150e610 http:0x14000b94550 https:0x14000b945a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0920 10:38:34.840946    7192 out_reason.go:110] 
	W0920 10:38:34.848753    7192 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:38:34.851793    7192 out.go:193] 
	
	
	* The control-plane node download-only-195000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-195000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
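
The `Failed to cache kubectl` error in the log above bottoms out in `bad response code: 404`: the checksum file for a v1.20.0 darwin/arm64 kubectl is not published, so the download is aborted before any bytes are fetched. A quick check of that response (URL copied from the log; sketch only):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status) // the run above observed a 404 here
	}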

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-195000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (6.54s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-177000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-177000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (6.537564666s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.54s)
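
The `download.go` lines in these logs attach a `?checksum=...` query to each source URL, the convention understood by hashicorp/go-getter (whose client struct is visible in the v1.20.0 failure dump above): the artifact is fetched and then verified against the named checksum. A minimal sketch of that pattern (URL and md5 copied from the v1.20.0 preload log; not minikube's actual download wrapper):

	package main

	import (
		"fmt"

		getter "github.com/hashicorp/go-getter"
	)

	func main() {
		// The checksum in the query string makes go-getter verify the file
		// after download and fail with "invalid checksum" on a mismatch.
		src := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942"
		if err := getter.GetFile("/tmp/preload.tar.lz4", src); err != nil {
			fmt.Println("download failed:", err)
			return
		}
		fmt.Println("downloaded and checksum-verified")
	}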

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 10:38:41.778062    7191 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 10:38:41.778120    7191 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-177000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-177000: exit status 85 (80.772375ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-195000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
	|         | -p download-only-195000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
	| delete  | -p download-only-195000        | download-only-195000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT | 20 Sep 24 10:38 PDT |
	| start   | -o=json --download-only        | download-only-177000 | jenkins | v1.34.0 | 20 Sep 24 10:38 PDT |                     |
	|         | -p download-only-177000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 10:38:35
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 10:38:35.268456    7216 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:35.268580    7216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:35.268583    7216 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:35.268586    7216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:35.268727    7216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:38:35.269796    7216 out.go:352] Setting JSON to true
	I0920 10:38:35.286072    7216 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4078,"bootTime":1726849837,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:38:35.286134    7216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:38:35.291393    7216 out.go:97] [download-only-177000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:38:35.291508    7216 notify.go:220] Checking for updates...
	I0920 10:38:35.295292    7216 out.go:169] MINIKUBE_LOCATION=19678
	I0920 10:38:35.298360    7216 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:38:35.302366    7216 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:38:35.305384    7216 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:38:35.308373    7216 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	W0920 10:38:35.314334    7216 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 10:38:35.314526    7216 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:38:35.317300    7216 out.go:97] Using the qemu2 driver based on user configuration
	I0920 10:38:35.317309    7216 start.go:297] selected driver: qemu2
	I0920 10:38:35.317312    7216 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:38:35.317357    7216 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:38:35.320412    7216 out.go:169] Automatically selected the socket_vmnet network
	I0920 10:38:35.325492    7216 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0920 10:38:35.325588    7216 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:38:35.325608    7216 cni.go:84] Creating CNI manager for ""
	I0920 10:38:35.325633    7216 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:38:35.325639    7216 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:38:35.325682    7216 start.go:340] cluster config:
	{Name:download-only-177000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-177000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:38:35.329143    7216 iso.go:125] acquiring lock: {Name:mk3af10b5799a45edf6eb5d92809da9193f6d956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:38:35.332326    7216 out.go:97] Starting "download-only-177000" primary control-plane node in "download-only-177000" cluster
	I0920 10:38:35.332335    7216 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:38:35.391164    7216 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:38:35.391188    7216 cache.go:56] Caching tarball of preloaded images
	I0920 10:38:35.391368    7216 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:38:35.396516    7216 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 10:38:35.396523    7216 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:38:35.496435    7216 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:38:39.741876    7216 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:38:39.742073    7216 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:38:40.264166    7216 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:38:40.264357    7216 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/download-only-177000/config.json ...
	I0920 10:38:40.264375    7216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19678-6679/.minikube/profiles/download-only-177000/config.json: {Name:mkbf16f4903e881818f4c113c0237484bb88d9d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:38:40.264622    7216 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:38:40.264746    7216 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19678-6679/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-177000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-177000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-177000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-710000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-710000: exit status 85 (58.169125ms)

-- stdout --
	* Profile "addons-710000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-710000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-710000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-710000: exit status 85 (53.817167ms)

-- stdout --
	* Profile "addons-710000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-710000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I0920 10:49:27.299117    7191 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 10:49:27.299242    7191 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W0920 10:49:29.263529    7191 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0920 10:49:29.263785    7191 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0920 10:49:29.263838    7191 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/001/docker-machine-driver-hyperkit
I0920 10:49:29.812406    7191 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40] Decompressors:map[bz2:0x14000131840 gz:0x14000131848 tar:0x140001317f0 tar.bz2:0x14000131800 tar.gz:0x14000131810 tar.xz:0x14000131820 tar.zst:0x14000131830 tbz2:0x14000131800 tgz:0x14000131810 txz:0x14000131820 tzst:0x14000131830 xz:0x14000131850 zip:0x14000131860 zst:0x14000131858] Getters:map[file:0x1400581e9c0 http:0x14000a2d720 https:0x14000a2d770] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 10:49:29.812553    7191 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate477781736/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.49s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 status: exit status 7 (30.049ms)

-- stdout --
	nospam-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 status: exit status 7 (30.405833ms)

-- stdout --
	nospam-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 status: exit status 7 (29.828958ms)

-- stdout --
	nospam-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 pause: exit status 83 (40.836125ms)

-- stdout --
	* The control-plane node nospam-531000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-531000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 pause: exit status 83 (40.880792ms)

-- stdout --
	* The control-plane node nospam-531000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-531000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 pause: exit status 83 (39.893167ms)

-- stdout --
	* The control-plane node nospam-531000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-531000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 unpause: exit status 83 (40.714458ms)

-- stdout --
	* The control-plane node nospam-531000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-531000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 unpause: exit status 83 (40.964292ms)

-- stdout --
	* The control-plane node nospam-531000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-531000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 unpause: exit status 83 (39.734291ms)

-- stdout --
	* The control-plane node nospam-531000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-531000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 stop: (1.834743041s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 stop: (3.47316775s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 stop: (3.523090417s)
--- PASS: TestErrorSpam/stop (8.83s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19678-6679/.minikube/files/etc/test/nested/copy/7191/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1962510626/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cache add minikube-local-cache-test:functional-693000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cache delete minikube-local-cache-test:functional-693000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-693000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 config get cpus: exit status 14 (29.104958ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 config get cpus: exit status 14 (35.315291ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-693000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-693000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (163.605875ms)

-- stdout --
	* [functional-693000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0920 10:40:26.986832    7784 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:40:26.986988    7784 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:40:26.986992    7784 out.go:358] Setting ErrFile to fd 2...
	I0920 10:40:26.986996    7784 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:40:26.987159    7784 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:40:26.988433    7784 out.go:352] Setting JSON to false
	I0920 10:40:27.008430    7784 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4189,"bootTime":1726849837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:40:27.008500    7784 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:40:27.014392    7784 out.go:177] * [functional-693000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:40:27.021419    7784 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:40:27.021470    7784 notify.go:220] Checking for updates...
	I0920 10:40:27.029288    7784 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:40:27.032347    7784 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:40:27.035364    7784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:40:27.038484    7784 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:40:27.041361    7784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:40:27.044646    7784 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:40:27.044947    7784 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:40:27.049341    7784 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:40:27.055302    7784 start.go:297] selected driver: qemu2
	I0920 10:40:27.055309    7784 start.go:901] validating driver "qemu2" against &{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:40:27.055356    7784 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:40:27.062409    7784 out.go:201] 
	W0920 10:40:27.066390    7784 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 10:40:27.070385    7784 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-693000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-693000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-693000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.04525ms)

-- stdout --
	* [functional-693000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0920 10:40:27.216698    7795 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:40:27.216809    7795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:40:27.216813    7795 out.go:358] Setting ErrFile to fd 2...
	I0920 10:40:27.216815    7795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:40:27.216948    7795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19678-6679/.minikube/bin
	I0920 10:40:27.218387    7795 out.go:352] Setting JSON to false
	I0920 10:40:27.235416    7795 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4190,"bootTime":1726849837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0920 10:40:27.235492    7795 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:40:27.240403    7795 out.go:177] * [functional-693000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0920 10:40:27.247275    7795 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 10:40:27.247338    7795 notify.go:220] Checking for updates...
	I0920 10:40:27.254373    7795 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	I0920 10:40:27.255868    7795 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:40:27.259315    7795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:40:27.262386    7795 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	I0920 10:40:27.265366    7795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:40:27.268710    7795 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:40:27.268987    7795 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:40:27.273353    7795 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0920 10:40:27.280313    7795 start.go:297] selected driver: qemu2
	I0920 10:40:27.280319    7795 start.go:901] validating driver "qemu2" against &{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:40:27.280361    7795 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:40:27.286406    7795 out.go:201] 
	W0920 10:40:27.290323    7795 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 10:40:27.294340    7795 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.24s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.9066865s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-693000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image rm kicbase/echo-server:functional-693000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-693000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image save --daemon kicbase/echo-server:functional-693000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-693000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
I0920 10:39:44.585522    7191 retry.go:31] will retry after 5.023496638s: Temporary Error: Get "http:": http: no Host in request URL
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "49.9215ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.924542ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "46.057584ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "31.608708ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012937791s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-693000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-693000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-693000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.43s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-854000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-854000 --output=json --user=testUser: (3.429001167s)
--- PASS: TestJSONOutput/stop/Command (3.43s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-504000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-504000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.033042ms)

-- stdout --
	{"specversion":"1.0","id":"aa8f5283-7f49-4404-a5e3-c179a00171de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-504000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"684ad860-fddb-46ec-b402-646274526517","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"2c28bbc3-7c6b-4097-ad51-49a7cc1757db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig"}}
	{"specversion":"1.0","id":"7a2d1432-af8e-467f-b72e-31c288518f0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"348f5463-13bd-49b5-ab55-8516d0384a08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9a468cdf-d973-49e2-818d-4a5460fb8172","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube"}}
	{"specversion":"1.0","id":"6e3b9ccb-7b8b-4178-bc3f-a2c626d81577","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1bf05db3-9b84-43e3-9f18-d6acd0e23f50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-504000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-504000
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.99s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-486000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-486000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (105.761209ms)

-- stdout --
	* [NoKubernetes-486000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19678-6679/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19678-6679/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-486000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-486000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.63375ms)

-- stdout --
	* The control-plane node NoKubernetes-486000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-486000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.36s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.660656791s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.702450708s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.36s)

TestNoKubernetes/serial/Stop (1.94s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-486000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-486000: (1.938242458s)
--- PASS: TestNoKubernetes/serial/Stop (1.94s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-486000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-486000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.522167ms)

-- stdout --
	* The control-plane node NoKubernetes-486000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-486000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.62s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-423000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.62s)

TestStartStop/group/old-k8s-version/serial/Stop (2.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-048000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-048000 --alsologtostderr -v=3: (2.092863459s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-048000 -n old-k8s-version-048000: exit status 7 (51.726125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-048000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (3.66s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-081000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-081000 --alsologtostderr -v=3: (3.657335s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.66s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000: exit status 7 (58.572792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-081000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.52s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-228000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-228000 --alsologtostderr -v=3: (3.522430833s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.52s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-228000 -n embed-certs-228000: exit status 7 (55.336541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-228000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-676000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-676000 --alsologtostderr -v=3: (3.022438208s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.02s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-676000 -n default-k8s-diff-port-676000: exit status 7 (64.90375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-676000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-082000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-082000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-082000 --alsologtostderr -v=3: (3.373681042s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.37s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-082000 -n newest-cni-082000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-082000 -n newest-cni-082000: exit status 7 (55.777125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-082000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (13.6s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2428408509/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726853984761103000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2428408509/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726853984761103000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2428408509/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726853984761103000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2428408509/001/test-1726853984761103000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (58.398667ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:39:44.820029    7191 retry.go:31] will retry after 284.208246ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.073125ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:39:45.193690    7191 retry.go:31] will retry after 873.500812ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.846041ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:39:46.156386    7191 retry.go:31] will retry after 1.345332235s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.0425ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:39:47.591093    7191 retry.go:31] will retry after 863.189192ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.562042ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:39:48.542248    7191 retry.go:31] will retry after 1.681444281s: exit status 83
I0920 10:39:49.611280    7191 retry.go:31] will retry after 7.781756608s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.201209ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:39:50.312172    7191 retry.go:31] will retry after 4.846518335s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.302916ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:39:55.244274    7191 retry.go:31] will retry after 2.862192574s: exit status 83
I0920 10:39:57.396391    7191 retry.go:31] will retry after 12.991058096s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.470292ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "sudo umount -f /mount-9p": exit status 83 (46.684542ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-693000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2428408509/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (13.60s)

TestFunctional/parallel/MountCmd/specific-port (15.22s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2080772466/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (59.674708ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:39:58.420443    7191 retry.go:31] will retry after 625.196853ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.557334ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:39:59.135635    7191 retry.go:31] will retry after 500.401731ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.579833ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:39:59.727034    7191 retry.go:31] will retry after 1.247918176s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.813125ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:40:01.061202    7191 retry.go:31] will retry after 1.17143445s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.260167ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:40:02.319283    7191 retry.go:31] will retry after 2.875265777s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.988292ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:40:05.280897    7191 retry.go:31] will retry after 3.903346965s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.7045ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:40:09.270427    7191 retry.go:31] will retry after 4.059794665s: exit status 83
I0920 10:40:10.389790    7191 retry.go:31] will retry after 16.034812009s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.320125ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "sudo umount -f /mount-9p": exit status 83 (46.579292ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-693000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2080772466/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (15.22s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.34s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup715051315/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup715051315/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup715051315/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1: exit status 83 (82.242667ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:40:13.668981    7191 retry.go:31] will retry after 574.76049ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1: exit status 83 (84.100333ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:40:14.330167    7191 retry.go:31] will retry after 460.339414ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1: exit status 83 (84.887458ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:40:14.877716    7191 retry.go:31] will retry after 761.675987ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1: exit status 83 (86.668417ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:40:15.728393    7191 retry.go:31] will retry after 1.221869836s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1: exit status 83 (86.245458ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:40:17.038817    7191 retry.go:31] will retry after 1.503647866s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1: exit status 83 (87.935083ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:40:18.632705    7191 retry.go:31] will retry after 2.894175271s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1: exit status 83 (87.445333ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
I0920 10:40:21.616686    7191 retry.go:31] will retry after 4.827076036s: exit status 83
I0920 10:40:26.426870    7191 retry.go:31] will retry after 22.440605443s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1: exit status 83 (87.420167ms)

-- stdout --
	* The control-plane node functional-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-693000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup715051315/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup715051315/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup715051315/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.34s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.34s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-189000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-189000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-189000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-189000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-189000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-189000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-189000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-189000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-189000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-189000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-189000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: /etc/hosts:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: /etc/resolv.conf:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-189000

>>> host: crictl pods:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: crictl containers:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> k8s: describe netcat deployment:
error: context "cilium-189000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-189000" does not exist

>>> k8s: netcat logs:
error: context "cilium-189000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-189000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-189000" does not exist

>>> k8s: coredns logs:
error: context "cilium-189000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-189000" does not exist

>>> k8s: api server logs:
error: context "cilium-189000" does not exist

>>> host: /etc/cni:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: ip a s:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: ip r s:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: iptables-save:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: iptables table nat:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-189000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-189000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-189000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-189000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-189000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-189000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-189000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-189000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-189000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-189000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-189000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: kubelet daemon config:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> k8s: kubelet logs:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-189000

>>> host: docker daemon status:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: docker daemon config:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: docker system info:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: cri-docker daemon status:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: cri-docker daemon config:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: cri-dockerd version:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: containerd daemon status:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: containerd daemon config:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: containerd config dump:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: crio daemon status:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: crio daemon config:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: /etc/crio:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"

>>> host: crio config:
* Profile "cilium-189000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-189000"
----------------------- debugLogs end: cilium-189000 [took: 2.230598667s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-189000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-189000
--- SKIP: TestNetworkPlugins/group/cilium (2.34s)
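Every probe in the debugLogs block above fails identically because the kubectl context cilium-189000 never existed: the test is skipped before any cluster starts, and the empty kubectl config shown ("clusters: null", "contexts: null") confirms it. The sketch below shows one way a collector could skip diagnostics when the context is absent; contextExists is a hypothetical helper, not minikube's code, though `kubectl config get-contexts NAME` is a real kubectl invocation that exits non-zero when the named context is missing.

package main

import (
	"fmt"
	"os/exec"
)

// contextExists reports whether the active kubeconfig knows the named
// context. Hypothetical guard for illustration; minikube's debugLogs
// collector does not necessarily work this way.
func contextExists(name string) bool {
	// `kubectl config get-contexts NAME` exits non-zero when the
	// context is absent -- the same failure mode logged above as
	// "context was not found for specified context".
	return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
}

func main() {
	if !contextExists("cilium-189000") {
		fmt.Println("skipping k8s diagnostics: context not found")
		return
	}
	// ...run the kubectl describe/logs collection here...
}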

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-898000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-898000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
